Input 0 of layer "bidirectional_2" is incompatible with the layer: expected ndim=3, found ndim=2

Time:04-06

I am trying to classify text with a bi-LSTM, but when I run model.predict on a new dataset it gives me this error:

Input 0 of layer "bidirectional_2" is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 100)

The shape of my training data is (39780, 2) and the shape of my testing data is (28619, 2).

model = Sequential()
model.add(Embedding(len(word_index) + 1, embed_size, weights=[embedding_matrix]))
model.add(Bidirectional(LSTM(50, return_sequences=True, dropout=0.1, recurrent_dropout=0.1)))
model.add(Bidirectional(LSTM(30, return_sequences=True)))
model.add(GlobalMaxPool1D())
model.add(Dense(50, activation="relu"))
model.add(Dropout(0.1))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

history=model.fit(X_train, Y_train, batch_size=64, epochs=5)
y_pred = model.predict([X_test], batch_size=26, verbose=1)
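
One quick way to see where the shapes diverge is to print the input shape the fitted model expects next to the shape of the array actually handed to predict. A minimal diagnostic sketch, assuming X_test is already a numeric array:

import numpy as np

# The fitted model records the input shape it was built with.
print("model expects:", model.input_shape)

# Compare with what predict will actually receive.
print("X_test shape: ", np.asarray(X_test).shape)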

CodePudding user response:

Try something like this (tested on TF 2.0 and TF 2.8):

import tensorflow as tf

vocab_size = 50
embedding_size = 100
model = tf.keras.Sequential()
model.add(tf.keras.layers.Embedding(vocab_size, embedding_size, input_length=2))
model.add(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(50, return_sequences=True, dropout=0.1, recurrent_dropout=0.1)))
model.add(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(30, return_sequences=True)))
model.add(tf.keras.layers.GlobalMaxPool1D())
model.add(tf.keras.layers.Dense(50, activation="relu"))
model.add(tf.keras.layers.Dropout(0.1))
model.add(tf.keras.layers.Dense(1, activation="sigmoid"))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
X_train = tf.random.uniform((39780, 2), maxval=vocab_size, dtype=tf.int32)
Y_train = tf.random.uniform((39780, 1), maxval=2, dtype=tf.int32)
X_test = tf.random.uniform((28619, 2), maxval=vocab_size, dtype=tf.int32)

history=model.fit(X_train, Y_train, batch_size=64, epochs=1)
y_pred = model.predict([X_test], batch_size=26, verbose=1)
Model: "sequential_1"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 embedding_2 (Embedding)     (None, 2, 100)            5000      
                                                                 
 bidirectional_2 (Bidirectio  (None, 2, 100)           60400     
 nal)                                                            
                                                                 
 bidirectional_3 (Bidirectio  (None, 2, 60)            31440     
 nal)                                                            
                                                                 
 global_max_pooling1d_1 (Glo  (None, 60)               0         
 balMaxPooling1D)                                                
                                                                 
 dense_2 (Dense)             (None, 50)                3050      
                                                                 
 dropout_1 (Dropout)         (None, 50)                0         
                                                                 
 dense_3 (Dense)             (None, 1)                 51        
                                                                 
=================================================================
Total params: 99,941
Trainable params: 99,941
Non-trainable params: 0
_________________________________________________________________
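
For real text, the same shape contract has to hold at predict time: the test sentences have to be converted with the tokenizer that was fitted on the training text and padded to the same length the network was trained on. A minimal sketch, assuming a hypothetical maxlen of 100 and the legacy tf.keras.preprocessing utilities:

from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

maxlen = 100  # hypothetical sequence length; must match the length used when the model was trained

train_texts = ["some training sentence", "another training sentence"]  # placeholder data
test_texts = ["a new sentence to classify"]                            # placeholder data

tokenizer = Tokenizer()
tokenizer.fit_on_texts(train_texts)  # fit the vocabulary on the training text only

X_train = pad_sequences(tokenizer.texts_to_sequences(train_texts), maxlen=maxlen)
X_test = pad_sequences(tokenizer.texts_to_sequences(test_texts), maxlen=maxlen)

# Both arrays are now 2-D integer matrices of width maxlen, which is what the
# Embedding layer expects; the vocab size for the Embedding would be len(tokenizer.word_index) + 1.
print(X_train.shape, X_test.shape)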