Time Series data to fit for ConvLSTM


I am using stock data with 4057 samples, windowed into 28 time steps with 25 features each.

TrainX shape: (4057, 28, 25)

The target consists of 5 integer categories:

[0,1,2,3,4]

and the inputs are reshaped into:

trainX_reshape = trainX.reshape(4057, 1, 28, 25, 1)
testX_reshape = testX.reshape(1334, 1, 28, 25, 1)
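
For reference, here is a quick sanity check of those shapes (a minimal sketch; the random arrays are only stand-ins for the real stock features):

import numpy as np

# Dummy arrays standing in for the real features (assumption: same shapes as described above).
trainX = np.random.rand(4057, 28, 25).astype('float32')
testX = np.random.rand(1334, 28, 25).astype('float32')

# ConvLSTM2D expects 5D input: (samples, time, rows, cols, channels).
trainX_reshape = trainX.reshape(4057, 1, 28, 25, 1)
testX_reshape = testX.reshape(1334, 1, 28, 25, 1)

print(trainX_reshape.shape)  # (4057, 1, 28, 25, 1)
print(testX_reshape.shape)   # (1334, 1, 28, 25, 1)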

Trying to fit the model:

from tensorflow.keras import Sequential
from tensorflow.keras.layers import ConvLSTM2D, BatchNormalization, Conv3D

seq = Sequential([
    ConvLSTM2D(filters=40, kernel_size=(3, 3), input_shape=(1, 28, 25, 1), padding='same', return_sequences=True),
    BatchNormalization(),
    ConvLSTM2D(filters=40, kernel_size=(3, 3), padding='same', return_sequences=True),
    BatchNormalization(),
    ConvLSTM2D(filters=40, kernel_size=(3, 3), padding='same', return_sequences=True),
    BatchNormalization(),
    ConvLSTM2D(filters=40, kernel_size=(3, 3), padding='same', return_sequences=True),
    BatchNormalization(),
    Conv3D(filters=5, kernel_size=(3, 3, 3), activation='sigmoid', padding='same', data_format='channels_last')
])

compiled with:

seq.compile(loss='sparse_categorical_crossentropy', optimizer='rmsprop')

history = seq.fit(trainX_reshape, trainY, epochs=10,
                  batch_size=128, shuffle=False, verbose=1,
                  validation_data=(testX_reshape, testY))
# validation_split=0.2  (alternative to validation_data)

and it gives this error:

InvalidArgumentError: Graph execution error:

How can I fix it? I've tried many things but have no clue.

The code and data are at https://drive.google.com/drive/folders/1WDa_CUO1Mr7wZTqE3wHsR0Tp_3NRMcZ6?usp=sharing (the notebook runs on Colab).

CodePudding user response:

Your model's output does not make sense if you are working with sparse integer labels: it is 5D, while your labels are 2D (including the batch dimension). Try:

from tensorflow.keras import Sequential
from tensorflow.keras.layers import (ConvLSTM2D, BatchNormalization, Conv3D,
                                     GlobalMaxPooling3D, Dense)

seq = Sequential([
    ConvLSTM2D(filters=40, kernel_size=(3, 3), input_shape=(1, 28, 25, 1), padding='same', return_sequences=True),
    BatchNormalization(),
    ConvLSTM2D(filters=40, kernel_size=(3, 3), padding='same', return_sequences=True),
    BatchNormalization(),
    ConvLSTM2D(filters=40, kernel_size=(3, 3), padding='same', return_sequences=True),
    BatchNormalization(),
    ConvLSTM2D(filters=40, kernel_size=(3, 3), padding='same', return_sequences=True),
    BatchNormalization(),
    Conv3D(filters=5, kernel_size=(3, 3, 3), padding='same', data_format='channels_last'),
    GlobalMaxPooling3D(),
    Dense(5, activation='softmax')
])

Note that in your original model, the 5D output is reshaped to a 2D tensor to be made compatible with sparse_categorical_crossentropy, and this reshaping destroys the batch dimension: you end up with logits of shape [89600, 5] (128 × 1 × 28 × 25 = 89600) against labels of shape [128], whereas the shapes should be [128, 5] and [128]. If your model's output were, for example, [128, 1, 1, 1, 5], it would probably work.
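
Here is a minimal smoke test of that fix with random data (assumptions: a trimmed-down model with a single ConvLSTM2D block but the same pooling + Dense head, and dummy inputs/labels matching the shapes from the question):

import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (ConvLSTM2D, BatchNormalization, Conv3D,
                                     GlobalMaxPooling3D, Dense)

# Trimmed-down version of the suggested model (one ConvLSTM2D block instead of four)
# so the smoke test runs quickly; the GlobalMaxPooling3D + Dense head is the same.
model = Sequential([
    ConvLSTM2D(filters=40, kernel_size=(3, 3), input_shape=(1, 28, 25, 1),
               padding='same', return_sequences=True),
    BatchNormalization(),
    Conv3D(filters=5, kernel_size=(3, 3, 3), padding='same', data_format='channels_last'),
    GlobalMaxPooling3D(),
    Dense(5, activation='softmax')
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='rmsprop')

# Random stand-ins for trainX_reshape / trainY with the shapes from the question.
X = np.random.rand(128, 1, 28, 25, 1).astype('float32')
y = np.random.randint(0, 5, size=(128,))       # sparse integer labels in [0, 4]

print(model.predict(X, verbose=0).shape)       # (128, 5): one probability vector per sample
model.fit(X, y, epochs=1, batch_size=128, verbose=1)

With the output at [128, 5] and the labels at [128], sparse_categorical_crossentropy lines up one label with one probability vector per sample.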
