TypeError: linear(): argument 'input' (position 1) must be Tensor, not Dropout pytorch

Time:04-26

I have an autoencoder in PyTorch and I want to add a dropout layer in the decoder (I am not sure where I should add the dropout). Below is a small example of the input data and the decoder function. Honestly, I don't know what I should do to fix the error. Could you please help me with that?

d_input   = torch.nn.Conv1d(1, 33, 10, stride=10)
mu_d      = nn.Linear(1485, 28)
log_var_d = nn.Linear(1485, 28)

def decode(self, z, y):

        indata     = torch.cat((z,y), 1) # shape: [batch_size, 451]
        indata     = torch.reshape(indata, (-1, 1, 451))
        hidden     = torch.flatten(relu(d_input(indata)), start_dim = 1) #shape [batch_size, 1485]
        hidden     = nn.Dropout(p=0.5) 
        par_mu     = self.mu_d(hidden)
        par_log_var= self.log_var_d(hidden)
        return par_mu, par_log_var

CodePudding user response:

torch.nn.Dropout is a module: you instantiate it once and then call the instance on a tensor. In your code, `hidden = nn.Dropout(p=0.5)` assigns the Dropout module itself to `hidden` instead of passing the tensor through it, so the following `nn.Linear` receives a `Dropout` object rather than a `Tensor`, which is exactly what the error message says. Create the dropout layer once, outside `decode`, and call it on `hidden`:

d_input   = torch.nn.Conv1d(1, 33, 10, stride=10)
mu_d      = nn.Linear(1485, 28)
log_var_d = nn.Linear(1485, 28)
dropout = nn.Dropout(p=0.5)

def decode(self, z, y):

        indata     = torch.cat((z,y), 1) # shape: [batch_size, 451]
        indata     = torch.reshape(indata, (-1, 1, 451))
        hidden     = torch.flatten(relu(d_input(indata)), start_dim = 1) #shape [batch_size, 1485]
        hidden     = dropout(hidden)
        par_mu     = self.mu_d(hidden)
        par_log_var= self.log_var_d(hidden)
        return par_mu, par_log_var
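For reference, here is a self-contained, runnable sketch of the same pattern, with the layers (including Dropout) registered as submodules in `__init__` and *called* in `forward`. The input split of 300 + 151 = 451 features for `z` and `y` is an assumption for illustration; only the concatenated size of 451 comes from the question.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.d_input   = nn.Conv1d(1, 33, 10, stride=10)
        self.dropout   = nn.Dropout(p=0.5)       # instantiate the module once
        self.mu_d      = nn.Linear(1485, 28)
        self.log_var_d = nn.Linear(1485, 28)

    def forward(self, z, y):
        indata = torch.cat((z, y), 1)            # shape: [batch_size, 451]
        indata = torch.reshape(indata, (-1, 1, 451))
        # Conv1d output length: (451 - 10) // 10 + 1 = 45, so 33 * 45 = 1485
        hidden = torch.flatten(F.relu(self.d_input(indata)), start_dim=1)
        hidden = self.dropout(hidden)            # call the instance on the tensor
        return self.mu_d(hidden), self.log_var_d(hidden)

# hypothetical sizes: z has 300 features, y has 151, so cat gives 451
dec = Decoder()
mu, log_var = dec(torch.randn(4, 300), torch.randn(4, 151))
print(mu.shape, log_var.shape)  # torch.Size([4, 28]) torch.Size([4, 28])
```

Registering Dropout as a submodule (rather than constructing it inside `forward`) also lets `model.train()` / `model.eval()` toggle it correctly, since dropout is only active in training mode.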