Strange error: TypeError: forward() takes 2 positional arguments but 3 were given


I am trying to train a VAE model. However, I keep getting the following error:

Traceback (most recent call last):
  File "/Users/devcharaf/Documents/Uni/UdeM/Internship/newGNN/app/train.py", line 28, in <module>
    trained_model, loss = train(model, train_data, optimizer, num_epochs=1000, model_type="VAE")
  File "/Users/devcharaf/Documents/Uni/UdeM/Internship/newGNN/app/utils.py", line 322, in train
    recon_x, mean, logvar = model(data["x"], data["edge_index"])
  File "/opt/anaconda3/envs/pygeometric/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/devcharaf/Documents/Uni/UdeM/Internship/newGNN/app/model.py", line 47, in forward
    mean, logvar = self.encoder_forward(x, edge_index)
  File "/Users/devcharaf/Documents/Uni/UdeM/Internship/newGNN/app/model.py", line 53, in encoder_forward
    x = self.encoder(x, edge_index)
  File "/opt/anaconda3/envs/pygeometric/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
TypeError: forward() takes 2 positional arguments but 3 were given

I tried many combinations of arguments, but nothing seems to work. Can you spot the error? As you can see in the model below, forward is called in the train() function with only two arguments, not three. The error also points at the line self.encoder(x, edge_index), but when I try to remove one of those arguments, I get an error saying there are not enough arguments. Here is my model:

# Variational Auto Encoder
class VAE(nn.Module):
    def __init__(self, in_dim, hidden_dim, latent_dim):
        super(VAE, self).__init__()
        self.in_dim = in_dim
        self.hidden_dim = hidden_dim
        self.latent_dim = latent_dim

        self.encoder = nn.Sequential(
            gnn.GCNConv(in_dim, hidden_dim),
            nn.ReLU(),
            gnn.GCNConv(hidden_dim, hidden_dim),
            nn.ReLU()
        )
        self.fc_mean = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.Sequential(
            gnn.GCNConv(latent_dim, hidden_dim),
            nn.ReLU(),
            gnn.GCNConv(hidden_dim, in_dim),
            nn.Sigmoid()
        )

    def forward(self, x, edge_index):
        mean, logvar = self.encoder_forward(x, edge_index)
        z = self.reparameterize(mean, logvar)
        recon_x = self.decoder_forward(z, edge_index)
        return recon_x, mean, logvar

    def encoder_forward(self, x, edge_index):
        x = self.encoder(x, edge_index)
        mean = self.fc_mean(x)
        logvar = self.fc_logvar(x)
        return mean, logvar

    def decoder_forward(self, x, edge_index):
        x = self.decoder(x, edge_index)
        return x

    def reparameterize(self, mean, logvar):
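        # Reparameterization trick: z = mean + std * eps, which keeps the
        # sample differentiable with respect to mean and logvar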
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return eps.mul(std).add_(mean)

and here is my train function:

def train(model, data, optimizer, num_epochs):
    model.train()
    criterion = nn.MSELoss()
    beta = 1
    epoch_losses = []
    for epoch in range(num_epochs):
        optimizer.zero_grad()

        # Output of the model
        recon_x, mean, logvar = model(data["x"], data["edge_index"])
        output = recon_x
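        # KL divergence between q(z|x) = N(mean, std^2) and the prior N(0, I)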
        kl_loss = -0.5 * torch.mean(1 + logvar - mean.pow(2) - logvar.exp())
        kl_loss *= beta
        kl_loss.backward(retain_graph=True)

        target = torch.zeros(data["num_nodes"], 1)

        # Loss computation
        loss = criterion(output, target)
        loss += kl_loss

        loss.backward()
        optimizer.step()
        epoch_losses.append(loss.item())
    
    return model, epoch_losses

Here is the code I am using to train the model:

model = VAE(in_dim=10, hidden_dim=32, latent_dim=8)

# Create an instance of the optimizer.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# Train the model for a specified number of epochs.
trained_model, loss = train(model, train_data, optimizer, num_epochs=1000)

CodePudding user response:

    def encoder_forward(self, x, edge_index):
        x = self.encoder(x, edge_index) # 3 positional arguments: self, x, edge_index

encoder is just an nn.Sequential. For Sequential, forward is defined as follows, and it can only take two positional arguments (self and a single input), which should be the root of the error.

def forward(self, input):
    for module in self:
        input = module(input)
    return input
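
You can reproduce the same error outside the model. A minimal sketch (the nn.Linear layer and tensor shapes here are made up for illustration):

import torch
import torch.nn as nn

seq = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
x = torch.randn(4, 10)
edge_index = torch.zeros((2, 3), dtype=torch.long)

seq(x)              # fine: Sequential.forward receives a single input
# seq(x, edge_index)  # TypeError: forward() takes 2 positional arguments but 3 were given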

While a bit of a hassle, to solve your problem you can write the forward methods for the encoder and decoder layer by layer.

I don't know the special layers you use or what output they produce, so at best you can do something like:

for module in self.encoder:
   x = module(x, edge_index) 

You probably need an extra if statement for when you hit the ReLU; a sketch of that check follows.
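
A minimal sketch of that idea, assuming the graph layers take (x, edge_index) while activations take only x. Since gnn.GCNConv subclasses MessagePassing, you can branch on that:

import torch_geometric.nn as gnn

def encoder_forward(self, x, edge_index):
    for module in self.encoder:
        if isinstance(module, gnn.MessagePassing):
            # graph convolutions also need the edge_index
            x = module(x, edge_index)
        else:
            # plain activations like nn.ReLU take a single input
            x = module(x)
    mean = self.fc_mean(x)
    logvar = self.fc_logvar(x)
    return mean, logvar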


The sure way would be to do the forward pass manually:

def __init__(self, in_dim, hidden_dim, latent_dim):
    ...
    self.enc_conv1 = gnn.GCNConv(in_dim, hidden_dim)
    self.enc_relu1 = nn.ReLU()
    self.enc_conv2 = gnn.GCNConv(hidden_dim, hidden_dim)
    self.enc_relu2 = nn.ReLU()
    ...

def encoder_forward(self, x, edge_index):
        x = self.enc_conv1(x, edge_index) # I don't know how these layers work or what output they produce
        x = self.enc_relu1(x)
        ... 
        logvar = self.fc_logvar(x)
        return mean, logvar
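
Filling in the elided lines, a complete manual encoder pass might look like this (a sketch using the enc_conv1/enc_relu1 names introduced above, and assuming GCNConv takes (x, edge_index)):

def encoder_forward(self, x, edge_index):
    x = self.enc_conv1(x, edge_index)
    x = self.enc_relu1(x)
    x = self.enc_conv2(x, edge_index)
    x = self.enc_relu2(x)
    mean = self.fc_mean(x)
    logvar = self.fc_logvar(x)
    return mean, logvar

The same pattern applies to decoder_forward with the decoder layers.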

CodePudding user response:

You are simply providing too many inputs to a function. In your case it looks like you don't have any kwargs defined but are using one.

trained_model, loss = train(model, train_data, optimizer, num_epochs=1000)

should be

trained_model, loss = train(model, train_data, optimizer, 1000)

or

you need to change the declaration of the function so num_epochs has a default value:

def train(model, data, optimizer, num_epochs = 1000):
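
With a default in place, all of these calls work (a sketch using the names from the question):

trained_model, loss = train(model, train_data, optimizer)                  # uses the default of 1000
trained_model, loss = train(model, train_data, optimizer, 500)             # positional override
trained_model, loss = train(model, train_data, optimizer, num_epochs=500)  # keyword override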