Feeding 2D tensor into RNN/LSTM layer


I have the following dataset:

for x, y in ds_train.take(1):
    print(x, y)

Output:

tf.Tensor(
[[-0.5         0.8660254  -0.67931056 -0.7338509   0.          0.
   0.          0.          0.          0.          1.          0.
   0.          0.          0.          0.          0.          0.
   0.          0.          0.          0.          0.          0.        ]
...
 [-0.5         0.8660254  -0.9754862  -0.22006041  0.          0.
   0.          0.          0.          0.          0.          0.
   0.          0.          0.          0.          0.          0.
   0.          0.          0.          0.          0.          0.        ]],
shape=(36, 24), dtype=float32) tf.Tensor([0 0 0 0 0 0 1 0], shape=(8,), dtype=int64)

I want to feed this data into an RNN layer:

model = tf.keras.models.Sequential([
  tf.keras.layers.SimpleRNN(40, input_shape=(36,24)),
  tf.keras.layers.Dense(8),
])

model.compile(loss='mae', optimizer='adam')
history = model.fit(ds_train, epochs=100)

I get the following error:

ValueError: Input 0 of layer "sequential_12" is incompatible with the layer:
expected shape=(None, 36, 24), found shape=(None, 24)

I don't understand why the found shape is (None, 24). As shown above, each tensor in the dataset has shape (36, 24). What is the proper way to feed such data (where axis 0 is time, i.e. 36 timesteps of 24 features each) into an RNN/LSTM layer?
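For reference, the per-sample shapes can be confirmed with element_spec (assuming ds_train is an unbatched tf.data.Dataset, as above):

print(ds_train.element_spec)
# (TensorSpec(shape=(36, 24), dtype=tf.float32, name=None),
#  TensorSpec(shape=(8,), dtype=tf.int64, name=None))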

UPDATE: As @AloneTogether pointed out, I need to batch my data first. After batching, I get another error:

    File "/home/mykolazotko/miniconda3/envs/tf/lib/python3.9/site-packages/keras/losses.py", line 1455, in mean_absolute_error
      return backend.mean(tf.abs(y_pred - y_true), axis=-1)
Node: 'mean_absolute_error/sub'
required broadcastable shapes
     [[{{node mean_absolute_error/sub}}]] [Op:__inference_train_function_32704]

It looks like the loss function doesn't like my target tensor of shape (8,).
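A quick way to see what actually reaches the loss is to print the batched shapes (a diagnostic sketch; if everything lines up, each batch should be (batch_size, 36, 24) for x and (batch_size, 8) for y):

for x_batch, y_batch in ds_train.take(1):
    print(x_batch.shape, y_batch.shape)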

CodePudding user response:

Make sure your input_shape covers both the time dimension and the features dimension, not only the features:

tf.keras.layers.SimpleRNN(40, input_shape=(36, 24))

And don't forget to set a batch size on your dataset:

ds_train = ds_train.batch(your_batch_size)

Otherwise, Keras will treat 36 as the batch dimension, which is exactly why the error reports found shape=(None, 24).
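To see the difference, compare the element spec before and after batching (a minimal sketch with zero tensors standing in for your data):

import tensorflow as tf

ds = tf.data.Dataset.from_tensor_slices(
    (tf.zeros((10, 36, 24)), tf.zeros((10, 8))))

print(ds.element_spec)           # shapes (36, 24) and (8,): no batch axis yet
print(ds.batch(4).element_spec)  # shapes (None, 36, 24) and (None, 8)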

Here is a working example:

import tensorflow as tf

# dummy data: 50 samples, each a sequence of 36 timesteps with 24 features
x = tf.random.normal((50, 36, 24))
# dummy labels: 8 random 0/1 values per sample, matching the Dense(8) output
y = tf.random.uniform((50, 8), dtype=tf.int32, maxval=2)

model = tf.keras.models.Sequential([
  tf.keras.layers.SimpleRNN(40, input_shape=(36, 24)),
  tf.keras.layers.Dense(8),
])

# batching adds the leading batch dimension the model expects
ds_train = tf.data.Dataset.from_tensor_slices((x, y)).batch(2)

model.compile(loss='mae', optimizer='adam')
history = model.fit(ds_train, epochs=100)

Also make sure every sample's label really has 8 elements; otherwise the MAE loss cannot broadcast y_true against the model's (batch_size, 8) predictions, which is what the broadcastable shapes error suggests.
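A quick sanity check on the batched dataset (a sketch, assuming ds_train is defined as above):

# every label batch should end in 8 entries to match Dense(8)
for _, y_batch in ds_train.take(3):
    assert y_batch.shape[-1] == 8, y_batch.shape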
