How to add a dropout layer in LSTM to avoid overfitting


I am implementing a hybrid quantum LSTM model, and it is overfitting, which results in low accuracy. I tried setting dropout = 1 in nn.LSTM, but there was no improvement. I am using a single hidden layer. How do I add a dropout layer to reduce the overfitting?

Model parameters:

input_dim = 16
hidden_dim = 100
layer_dim = 1
output_dim = 1

Model class:

import numpy as np
import qiskit
import torch
import torch.nn as nn

# `Hybrid` is the quantum layer, defined elsewhere in the project.

class LSTMModel(nn.Module):
    def __init__(self, input_dim, hidden_dim, layer_dim, output_dim):
        super(LSTMModel, self).__init__()
        self.hidden_dim = hidden_dim
        self.layer_dim = layer_dim

        # attempted regularization: dropout set to 1
        self.lstm = nn.LSTM(input_dim, hidden_dim, layer_dim, dropout=1, batch_first=True)

        self.fc = nn.Linear(hidden_dim, output_dim)
        self.hybrid = Hybrid(qiskit.Aer.get_backend('qasm_simulator'), 100, np.pi / 2)

    def forward(self, x):
        # zero initial hidden and cell states, detached from the graph
        h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_()
        c0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_()

        x, (hn, cn) = self.lstm(x, (h0.detach(), c0.detach()))

        x = self.fc(x[:, -1, :])  # keep only the last time step's output
        x = self.hybrid(x)
        return torch.cat((x, 1 - x), -1)  # two-class output: (p, 1 - p)

CodePudding user response:

PyTorch's LSTM layer takes the dropout argument as the probability with which each element of a layer's output is zeroed out. Passing 1 therefore zeroes out the entire output of every layer the dropout is applied to. You presumably meant a conventional value such as 0.3 or 0.5.
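For example, a minimal sketch of the built-in argument used as intended (the sizes are taken from your model parameters; num_layers=2 and dropout=0.3 are illustrative choices, not from your code):

import torch.nn as nn

# with num_layers >= 2, dropout is applied to the output of each
# LSTM layer except the last, and only while in training mode
lstm = nn.LSTM(input_size=16, hidden_size=100, num_layers=2,
               dropout=0.3, batch_first=True)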

As @ayandas also points out, nn.LSTM applies this dropout to the output of each layer except the last (see the nn.LSTM documentation), so it has no effect at all on a single-layer LSTM. If you want to keep a single layer, you can always apply your own dropout to the LSTM's output using nn.Dropout, as in the sketch below.
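Here is a minimal sketch of that approach. The quantum Hybrid head is omitted so the snippet is self-contained, and the class name and drop_prob parameter are introduced here purely for illustration:

import torch
import torch.nn as nn

class LSTMModelWithDropout(nn.Module):
    def __init__(self, input_dim, hidden_dim, layer_dim, output_dim, drop_prob=0.5):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.layer_dim = layer_dim
        # no built-in dropout: it would be ignored with a single layer
        self.lstm = nn.LSTM(input_dim, hidden_dim, layer_dim, batch_first=True)
        # manual dropout applied to the LSTM output instead
        self.dropout = nn.Dropout(p=drop_prob)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim, device=x.device)
        c0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim, device=x.device)
        out, _ = self.lstm(x, (h0, c0))
        out = self.dropout(out[:, -1, :])  # active only in model.train() mode
        return self.fc(out)

Remember to call model.train() during training and model.eval() during evaluation so the dropout layer is switched on and off correctly.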
