I'm trying to train a Keras model where a single input is a normalized array of 512 floats. I currently have 539 of these inputs in my training data, but the following error is produced as soon as the predict() method is called:
Input 0 of layer "dense" is incompatible with the layer: expected min_ndim=2, found ndim=1. Full shape received: (32,)
And here is the code. Note that I've tried passing different shapes to Input() based on suggestions I found in other posts, such as Input(shape=(1,)) and Input(shape=(512,)), but neither of these worked.
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Input

X = np.array(X)  # (539, 512)
y = np.array(y)  # (539,)
model = Sequential()
model.add(Input(shape=(None, 512)))
model.add(Dense(64, activation=tf.nn.relu))
model.add(Dense(1, activation=tf.nn.sigmoid))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics='accuracy')
model.fit(X, y, batch_size=4, validation_split=0.1, epochs=5)
prediction = model.predict([X[0]])
CodePudding user response:
Well, the model also expects a batch dimension. Try this:
model = Sequential()
model.add(Input(shape=(512,)))  # each sample is a flat vector of 512 floats
model.add(Dense(64, activation=tf.nn.relu))
model.add(Dense(1, activation=tf.nn.sigmoid))
# a single sigmoid output is binary classification, so binary_crossentropy is the appropriate loss
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, batch_size=4, validation_split=0.1, epochs=5)
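With shape=(512,), the model's expected input shape is (None, 512), where the leading None is the batch dimension. As a quick sanity check (just a sketch using the model built above), you can verify this before predicting:
print(model.input_shape)  # (None, 512) -- None is the batch dimension
model.summary()           # each layer's output shape is also reported as (None, ...)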
Now, for predict(), expand the dimensions so the single sample gets a batch axis:
prediction = model.predict(tf.expand_dims(X[0], axis=0))  # (512,) -> (1, 512)
Output
1/1 [==============================] - 0s 119ms/step
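Equivalently (a sketch assuming the same X as above), plain NumPy indexing keeps the batch axis, and you can also predict on the whole array at once since Keras batches it internally:
prediction = model.predict(X[:1])                # slicing keeps the batch axis: shape (1, 512)
prediction = model.predict(X[0][np.newaxis, :])  # or re-add the batch axis explicitly
predictions = model.predict(X)                   # all 539 samples at once -> shape (539, 1)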