How to add multiple layers to an RNN module for sentiment analysis? Pytorch


I am trying to create a sentiment analysis model with PyTorch (I'm a newbie):

import torch.nn as nn

class RNN(nn.Module):
    def __init__(self, input_dim, embedding_dim, hidden_dim, output_dim, dropout):
        super().__init__()  # call the constructor of the superclass
        self.embedding = nn.Embedding(input_dim, embedding_dim)  # embedding layer to create dense vectors instead of a sparse matrix
        self.rnn = nn.RNN(embedding_dim, hidden_dim)
        self.fc = nn.Linear(hidden_dim, output_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, text):
        embedded = self.embedding(text)
        output, hidden = self.rnn(embedded)
        hidden = self.dropout(hidden[-1, :, :])
        nn.Sigmoid()
        return self.fc(hidden)

However, the accuracy is below 50%, and I would like to add an extra layer, maybe another linear layer before feeding into the last linear layer that produces the prediction. What kind of layers can I add after the RNN and before the last Linear, and what should I feed into them? I have tried simply adding another

output, hidden = self.fc(hidden)

but I get

ValueError: too many values to unpack (expected 2)

I believe this is because the output of the previous layer, after the activation and dropout, has a different shape. Any help is greatly appreciated.

Thanks

CodePudding user response:

You were very close; add a second linear layer in __init__ and change your forward method to:

import torch.nn as nn
import torch.nn.functional as F

class model_RNN(nn.Module):
    def __init__(self, input_dim, embedding_dim, hidden_dim, output_dim, dropout):
        super().__init__()  # call the constructor of the superclass
        self.embedding = nn.Embedding(input_dim, embedding_dim)  # embedding layer to create dense vectors instead of a sparse matrix
        self.rnn = nn.RNN(embedding_dim, hidden_dim)
        self.hidden_fc = nn.Linear(hidden_dim, hidden_dim)  # extra fully connected layer
        self.out_fc = nn.Linear(hidden_dim, output_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, text):
        embedded = self.embedding(text)
        output, hidden = self.rnn(embedded)
        hidden = self.dropout(hidden[-1, :, :])   # last hidden state, with dropout
        hidden = F.relu(self.hidden_fc(hidden))   # new hidden layer + ReLU activation
        return self.out_fc(hidden)                # raw logits
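
As a quick sanity check of the tensor shapes, you could run something like this (the hyperparameter values below are only placeholders, not taken from your setup):

import torch

model = model_RNN(input_dim=25_000,   # vocabulary size
                  embedding_dim=100,
                  hidden_dim=256,
                  output_dim=1,
                  dropout=0.5)

# nn.RNN defaults to batch_first=False, so the input is [seq_len, batch_size]
dummy_batch = torch.randint(0, 25_000, (50, 32))
logits = model(dummy_batch)  # shape [32, 1], raw logits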

Just a note: calling nn.Sigmoid() on its own won't do anything to your model output, because it only creates a sigmoid layer without applying it to your data. What you probably want is torch.sigmoid(self.fc(hidden)). That said, it's usually not recommended to put an activation on the output, because common loss functions such as nn.BCEWithLogitsLoss expect the raw logits. Just make sure you apply the sigmoid to the model output yourself at evaluation time!
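
For example, with nn.BCEWithLogitsLoss you would train on the raw logits and only apply the sigmoid when you need probabilities at evaluation time (text_batch and labels below are placeholder names):

criterion = nn.BCEWithLogitsLoss()  # applies the sigmoid internally, so the model should return raw logits

# training step
model.train()
logits = model(text_batch).squeeze(1)      # [batch_size]
loss = criterion(logits, labels.float())   # labels are 0/1 floats
loss.backward()

# evaluation: apply the sigmoid yourself to turn logits into probabilities
model.eval()
with torch.no_grad():
    probs = torch.sigmoid(model(text_batch).squeeze(1))
    preds = (probs > 0.5).long()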
