ValueError: Input 0 of layer sequential_3 is incompatible with the layer: : expected min_ndim=4, found ndim=3

Time:04-17

I am new to Keras and I am attempting to create a CNN that takes an image of shape (224, 256, 1) as input.

Here is the error I keep getting:

    ValueError: Input 0 of layer sequential_5 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: (None, 224, 256)

My interpretation of the error is that the data the layer received had 3 dimensions, but the layer needs at least 4. According to the Keras documentation, the input shape should be (batch_size, x, y, channels). I am only using a single image, so I believe the batch size should just be 1.
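To see the mismatch concretely, here is a minimal sketch using NumPy (whose `expand_dims` has the same shape semantics as `tf.expand_dims`), with a zero array standing in for the image:

```python
import numpy as np

# A grayscale image without a channel axis is 2-D: (224, 256).
img = np.zeros((224, 256), dtype=np.float32)

# Adding only the batch axis gives (1, 224, 256) -- still ndim=3,
# exactly the shape reported in the error message.
batched_only = np.expand_dims(img, axis=0)
print(batched_only.ndim)   # 3

# Conv2D expects (batch, height, width, channels), so a channel axis
# is needed as well: (1, 224, 256, 1) -- ndim=4.
fixed = np.expand_dims(batched_only, axis=-1)
print(fixed.shape)         # (1, 224, 256, 1)
```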

Here is the code for making the model:

model = keras.Sequential([
                          keras.layers.Conv2D(filters=32, kernel_size=(3,3), activation="relu", padding='same', input_shape=(224,256,1), data_format='channels_last'),
                          keras.layers.MaxPool2D(pool_size=(2,2), padding='same'), 
                          keras.layers.Conv2D(filters=64, kernel_size=(3,3), activation='relu', padding='same'),
                          keras.layers.MaxPool2D(pool_size=(2,2), padding='same'), 
                          keras.layers.Flatten(),
                          keras.layers.Dense(8, activation="softmax")
])

Here is the prediction code:

img = get_image()
img = convert_to_greyscale(img)
img = tf.expand_dims(img, axis=0) # add dimension to represent batch to the front
prediction = model.predict(img) # ValueError Input 0 of sequential_3 ...

Let me know if you need any more info, thanks!

CodePudding user response:

You need to add a channel dimension to the image and then use expand_dims to add the batch dimension, like below (also resize the image to match your model's input size):

from skimage import io
import tensorflow as tf

img = io.imread('1.jpeg', as_gray=True)[..., None]  # add channel axis: (H, W, 1)
img = tf.image.resize(img, [224, 256])              # resize to the model's input size
img = tf.expand_dims(img, axis=0)                   # add batch axis: (1, 224, 256, 1)
model.predict(img)

Output:

array([[0.1329774 , 0.1408476 , 0.13449876, 0.10563403, 0.11976303,
        0.12162709, 0.12393728, 0.12071485]], dtype=float32)
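For anyone unfamiliar with the `[..., None]` indexing above: it appends a trailing axis of size 1, which serves as the channel axis. A quick NumPy illustration (NumPy and TensorFlow index the same way here; the shapes mirror the answer's pipeline):

```python
import numpy as np

gray = np.zeros((224, 256))      # io.imread(..., as_gray=True) returns a 2-D array
with_channel = gray[..., None]   # append channel axis -> (224, 256, 1)
batch = with_channel[None, ...]  # prepend batch axis  -> (1, 224, 256, 1)
print(batch.shape)
```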