Using the LSTM layer in an encoder in PyTorch


I want to build an autoencoder with LSTM layers, but I get an error at the very first step of the encoder. Could you please help me with that? Here is the model I tried to build:

import numpy as np
import torch
import torch.nn as nn

r_input    = torch.nn.LSTM(1, 1, 28)
activation = nn.functional.relu
mu_r       = nn.Linear(22, 6)
log_var_r  = nn.Linear(22, 6)

y = torch.from_numpy(np.random.rand(1, 1, 28)).float()

def encode_r(y):
    y         = torch.reshape(y, (-1, 1, 28)) # torch.Size([batch_size, 1, 28])
    hidden    = torch.flatten(activation(r_input(y)), start_dim=1)
    z_mu      = mu_r(hidden)
    z_log_var = log_var_r(hidden)
    return z_mu, z_log_var

z_mu, z_log_var = encode_r(y)

When I run it, I get this error:

RuntimeError: input.size(-1) must be equal to input_size. Expected 1, got 28. 

CodePudding user response:

You're not constructing the layer correctly. The first argument to torch.nn.LSTM is input_size, i.e. the size of the last dimension of the input, which in your case is 28; the third positional argument is num_layers, not the sequence length. Judging from your nn.Linear(22, 6) layers, you want the LSTM to produce 22 features, so use a hidden size of 22. You're also passing the batch as the first dimension, so you need to pass batch_first=True as well.

r_input = torch.nn.LSTM(28, 22, batch_first=True)

This should work for your setup. Also note that an LSTM returns a tuple of two items: the output sequence and the final (hidden, cell) states. The output sequence, the first item, is the one you want to flatten here.

hidden = torch.flatten(activation(r_input(y)[0]), start_dim=1)

See the official nn.LSTM documentation for more details.
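
Putting the two fixes together, here is a minimal sketch of the corrected encoder, assuming the shapes from your code (sequence length 1, 28 input features, hidden size 22, latent size 6):

import torch
import torch.nn as nn

r_input    = nn.LSTM(28, 22, batch_first=True)   # input_size=28, hidden_size=22
activation = nn.functional.relu
mu_r       = nn.Linear(22, 6)
log_var_r  = nn.Linear(22, 6)

def encode_r(y):
    y = torch.reshape(y, (-1, 1, 28))             # (batch_size, seq_len=1, 28)
    output, (h_n, c_n) = r_input(y)               # output: (batch_size, 1, 22)
    hidden = torch.flatten(activation(output), start_dim=1)  # (batch_size, 22)
    z_mu      = mu_r(hidden)
    z_log_var = log_var_r(hidden)
    return z_mu, z_log_var

y = torch.rand(1, 1, 28)                          # dummy input, in place of the np.random.rand array
z_mu, z_log_var = encode_r(y)
print(z_mu.shape, z_log_var.shape)                # torch.Size([1, 6]) torch.Size([1, 6])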
