LSTM, multi-variate, multi-feature in pytorch

Time:02-08

I'm having trouble understanding the format of data for an LSTM in PyTorch. Let's say I have a CSV file with 4 features, laid out in timestamps one after the other (a classic time series):

time1 feature1 feature2 feature3 feature4
time2 feature1 feature2 feature3 feature4
time3 feature1 feature2 feature3 feature4
time4 feature1 feature2 feature3 feature4, label

However, this entire set of 4 timesteps has only a single label. The thing we're trying to classify started at time1, but we don't know how to label it until time4.
My question is: can a typical PyTorch LSTM support this? All of the tutorials I've read, watched, or walked through involve looking at a time sequence of a single feature, or a word model, which is still a dataset with a single dimension.

If it can support it, does the data need to be flattened in some way?

PyTorch's LSTM reference states:

input: tensor of shape (L, N, H_in) when batch_first=False or (N, L, H_in) when batch_first=True containing the features of the input sequence. The input can also be a packed variable length sequence.
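For illustration, here is a minimal sketch of what those shapes look like with 4 features per timestep (the batch size and hidden size are arbitrary, not from the docs):

```python
import torch
import torch.nn as nn

# N=8 sequences per batch, L=4 timesteps, H_in=4 features per timestep
x = torch.randn(8, 4, 4)

lstm = nn.LSTM(input_size=4, hidden_size=32, batch_first=True)
output, (h_n, c_n) = lstm(x)

print(output.shape)  # torch.Size([8, 4, 32]) -- one hidden state per timestep
print(h_n.shape)     # torch.Size([1, 8, 32]) -- final hidden state only
```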

Does this mean that it cannot support any input that contains multiple sequences? Or is there another name for this?

I'm really lost here, and could use any advice, pointers, help, so on. Maybe some disambiguation too.

I've posted a couple times here but gotten no responses at all. If this post is misplaced, could someone kindly direct me towards the correct place to post it?

Edit: Following Daniel's advice, do I understand correctly that the four features should be put together like this:

[(feature1, feature2, feature3, feature4, feature1, feature2, feature3, feature4, feature1, feature2, feature3, feature4, feature1, feature2, feature3, feature4), label]  when given to the LSTM?

If that's correct, is the input size (16) in this case?

Finally, I was under the impression that the output of the LSTM would be the predicted label. Do I have that wrong?

CodePudding user response:

As you show, the LSTM layer's input size is (batch_size, sequence_length, feature_size). This means that each timestep's feature is assumed to be a 1D vector.

So to use it in your case, you need to stack your four features into one vector per timestep (if they are more than 1D themselves, flatten them first) and use that vector as the layer's input.
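As a sketch (the feature values here are made up): each CSV row becomes one 4-element timestep vector, so one labelled example is a (4, 4) tensor, a batch of them is (batch_size, 4, 4), and the LSTM's input_size is 4, not 16:

```python
import torch

# One labelled example: 4 timesteps, each a vector of the 4 features
example = torch.tensor([
    [0.1, 0.2, 0.3, 0.4],  # time1: feature1..feature4
    [0.5, 0.6, 0.7, 0.8],  # time2
    [0.9, 1.0, 1.1, 1.2],  # time3
    [1.3, 1.4, 1.5, 1.6],  # time4
])

# A batch of such examples, shape (batch_size, seq_len, feature_size)
batch = example.unsqueeze(0)
print(batch.shape)  # torch.Size([1, 4, 4])
```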

Regarding the label: it is definitely supported to have a label only after several timesteps. The LSTM will output a sequence with the same length as the input sequence, but when training the LSTM you can choose to use any part of that sequence in the loss function. In your case you will want to use the last element only.
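A minimal sketch of that, one label per 4-step sequence (the hidden size, number of classes, and classifier head are illustrative choices, not prescribed):

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=4, hidden_size=32, batch_first=True)
classifier = nn.Linear(32, 2)  # e.g. 2 classes; use your own count
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 4, 4)            # (batch, seq_len=4, features=4)
labels = torch.randint(0, 2, (8,))  # one label per whole sequence

output, _ = lstm(x)        # (8, 4, 32): one hidden state per timestep
last = output[:, -1, :]    # (8, 32): keep only the last timestep
logits = classifier(last)  # (8, 2): per-sequence class scores
loss = loss_fn(logits, labels)
loss.backward()
```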
