Input 0 of layer "conv2d_24" is incompatible with the layer: expected min_ndim=4, found ndim=2


I am currently attempting to build my first deep learning model, which is meant to detect fire and smoke in images.

The code works up until I attempt to fit the model, at which point it throws this error:

ValueError: Exception encountered when calling layer "sequential_8" (type Sequential).   
Input 0 of layer "conv2d_24" is incompatible with the layer: expected min_ndim=4, found ndim=2. Full shape received: (None, 1)

I loaded my images through glob:

fire = glob.glob('1/*.png')
Nonfire = glob.glob('0/*.png')

then turned them into lists and merged them into a pandas DataFrame.
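In code, that step looked roughly like this:

import pandas as pd

# merge the image paths and their labels into one DataFrame
df = pd.DataFrame({'images': fire + Nonfire,
                   'label': [1] * len(fire) + [0] * len(Nonfire)})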

Then I split my data into training and testing like this


test_size = int(len(df) * 0.1) # the test data will be 10% of the entire data
train = df.iloc[:-test_size,:].copy() 
test = df.iloc[-test_size:,:].copy()
print(train.shape, test.shape)

x_train = train.drop('label',axis=1).copy()
y_train = train[['label']].copy()

x_test = test.drop('label',axis=1).copy()
y_test = test[['label']].copy()

and then built and compiled the model like this

model = Sequential()

model.add(Conv2D(128,(2,2),input_shape = (196,196,3),activation='relu'))
model.add(Conv2D(64,(2,2),activation='relu'))
model.add(MaxPooling2D())
model.add(Conv2D(32,(2,2),activation='relu'))
model.add(MaxPooling2D())

model.add(Flatten())
model.add(Dense(128))
model.add(Dense(1,activation= "sigmoid"))
#Model description
model.summary()

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

and all of the code works without error.

However, when I attempt to fit the model like this

model.fit(x_train,y_train,validation_data=(x_test,y_test),epochs = 30,batch_size = 32)

it throws the above error and I am unable to understand why.

CodePudding user response:

You probably have an issue with your x_train data. What is the result of this print if you execute it right before calling fit()?

print(train.shape, test.shape)

In your first layer you set an input shape of (196, 196, 3); add the batch dimension and the model expects four dimensions. But what you are giving fit() is shaped (None, 1): the batch dimension (the None in the error message) plus a single column -> two dimensions.

So the shape of your training data is probably wrong. It is hard to pin down without your dataset, but with some debugging you should be able to solve it.
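For example, a quick shape check right before fitting makes the mismatch visible (assuming x_train and y_train are the frames from your split):

import numpy as np

print(np.array(x_train).shape)  # a frame of file names gives (num_samples, 1),
                                # but the model expects (num_samples, 196, 196, 3)
print(np.array(y_train).shape)  # labels: (num_samples, 1) is fine here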

Update:

I've built this pre-processing script for your data. I've only tested it on a couple of dummy images, but it should work for your case:

import glob
import numpy as np
import cv2
from sklearn.model_selection import train_test_split

#reads the names of the dataset images
fire_images = glob.glob('1/*.png')
nonfire_images = glob.glob('0/*.png')

print(fire_images)
>> ['1/1.png', '1/0.png']
print(nonfire_images)
>> ['0/1.png', '0/0.png']

# loading the images as np.array

X = []
y = []

for image in fire_images:
  X.append(cv2.resize(cv2.imread(image), (196, 196)))  # resize to the model's 196x196 input
  y.append(1)  # assign label 1 to fire_images
for image in nonfire_images:
  X.append(cv2.resize(cv2.imread(image), (196, 196)))
  y.append(0)  # assign label 0 to nonfire_images

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

#importing keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Dense, Flatten, MaxPooling2D

#Building model
model = Sequential()

model.add(Conv2D(128,(2,2),input_shape = (196,196,3),activation='relu'))
model.add(Conv2D(64,(2,2),activation='relu'))
model.add(MaxPooling2D())
model.add(Conv2D(32,(2,2),activation='relu'))
model.add(MaxPooling2D())

model.add(Flatten())
model.add(Dense(128))
model.add(Dense(1,activation= "sigmoid"))
#Model description
model.summary()

model.compile(optimizer='Adam', loss='binary_crossentropy', metrics=['accuracy'])

model.fit(np.array(X_train), np.array(y_train),
          validation_data=(np.array(X_test), np.array(y_test)),
          epochs=30, batch_size=32)

I've used train_test_split from sklearn.model_selection to split the dataset. Note that it shuffles the data by default (the random_state argument just makes the shuffle reproducible), and the shuffle matters here: since the fire and non-fire images are loaded from two different folders, the first rows of the array are all fire data and the last rows are all non-fire data. If you split the dataset without shuffling, your test set would contain only non-fire data and your training results would look poor.
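As an optional refinement (your error doesn't require it), train_test_split also accepts a stratify argument that keeps the fire/non-fire ratio the same in both splits, which helps with small datasets:

# optional: preserve the class balance in both splits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42, stratify=y)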

Your problem was basically at the start of your script: inside the images column of your dataframe you had the names of your files, not the arrays of actual pixel data. Note also that I changed the loss to binary_crossentropy, which is the correct loss for a single sigmoid output unit; sparse_categorical_crossentropy expects one output unit per class.
