Binary Image Classifier in Keras Shows Properly Decreasing Loss but Constant Accuracy

Time: 04-17

I have a basic Keras image classifier for grayscale 64x64 images loaded from local folders. The code runs without errors, yet accuracy stays at a near-constant 50% across epochs, so something must be silently going wrong.

import numpy as np
import tensorflow
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Dropout, MaxPool2D, Flatten, Dense

model = Sequential()
trainData = tensorflow.keras.preprocessing.image_dataset_from_directory(
    directory='TrainingData/',
    labels='inferred',
    label_mode='int',
    color_mode="grayscale",
    batch_size=5,
    image_size=(64, 64))
testData = tensorflow.keras.preprocessing.image_dataset_from_directory(
    directory='TestingData/',
    labels='inferred',
    label_mode='int',
    color_mode="grayscale",
    batch_size=5,
    image_size=(64, 64))

trainDataImage = np.concatenate([ x for x, y in trainData ], axis=0)
trainDataLabel = np.concatenate([ y for x, y in trainData ], axis=0)

testDataImage  = np.concatenate([ x for x, y in testData  ], axis=0)
testDataLabel  = np.concatenate([ y for x, y in testData  ], axis=0)
model.add(Conv2D(60, kernel_size = 1, activation='relu', input_shape=(64, 64, 1), padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(35, kernel_size = 1, activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(MaxPool2D(2))
model.add(Conv2D(20, kernel_size = 1, activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(MaxPool2D(2))
model.add(Conv2D(10, kernel_size = 1, activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(2, activation='softmax'))
model.summary()
def Reshaper(var, imNumber, isImage):
    # np.reshape returns a new array; assign it back, or the call is a no-op
    if isImage:
        var = np.reshape(var, (imNumber, 64, 64, 1))
    else:
        var = np.reshape(var, (imNumber, 1))
    return var
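Note that np.reshape returns a new array rather than modifying its argument in place, which is why a Reshaper that discards the return value does nothing. A quick check:

```python
import numpy as np

a = np.arange(4)
np.reshape(a, (2, 2))       # return value discarded; a is unchanged
print(a.shape)              # still (4,)

b = np.reshape(a, (2, 2))   # keep the return value instead
print(b.shape)              # (2, 2)
```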

trainDataImage = Reshaper(trainDataImage, 2800, True)
trainDataLabel = Reshaper(trainDataLabel, 2800, False)

def OneHotEncode(DataLabel, labelNum):
    OneHot = np.zeros((labelNum, 2))
    count = 0
    while count < labelNum:
        OneHot[count][DataLabel[count].astype(int)] = 1
        count = count + 1
    return OneHot
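For reference, the same encoding can be written without a loop. This is a sketch of a hypothetical one_hot_encode helper (not part of the original code) that accepts integer labels of shape (N,) or (N, 1):

```python
import numpy as np

def one_hot_encode(labels, num_classes=2):
    # flatten (N, 1) or (N,) integer labels, then scatter a 1 per row
    labels = np.asarray(labels, dtype=int).ravel()
    one_hot = np.zeros((labels.size, num_classes))
    one_hot[np.arange(labels.size), labels] = 1
    return one_hot

print(one_hot_encode([0, 1, 1]))
# rows: [1, 0], [0, 1], [0, 1]
```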

trainDataLabel = OneHotEncode(trainDataLabel, 2800)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

model.fit(trainDataImage, trainDataLabel, epochs=5, batch_size=5)

testDataImage = Reshaper(testDataImage, 400, True)
testDataLabel = Reshaper(testDataLabel, 400, False)
testDataLabel = OneHotEncode(testDataLabel, 400)

output = model.evaluate(testDataImage, testDataLabel, verbose=True, batch_size=5)

print("The model loss and accuracy respectively:", output)

model.save('Model.h5')

CodePudding user response:

I was able to fix this. I'm still not certain what the exact issue was, but it was somewhere in the lines that initialize trainData/testData and the code that splits them into images and labels. I fixed it by importing image_dataset_loader instead, which let me load from a directory structure where the dataset is already split into images and labels.

Here's what I used:

(trainDataImage, trainDataLabel), (testDataImage, testDataLabel) = image_dataset_loader.load('./Dataset', ['TrainingData', 'TestingData'])
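The thread never pins down the root cause, but a common culprit with this pattern: image_dataset_from_directory defaults to shuffle=True, and the dataset reshuffles every time it is iterated. The question's two list comprehensions (one collecting x, one collecting y) walk the dataset twice, so the images and labels come from different shuffles and no longer line up — which would explain coin-flip accuracy. Passing shuffle=False, or collecting both in a single pass, keeps them paired. A minimal sketch of the failure mode, using a plain generator in place of the tf.data pipeline:

```python
import random
import numpy as np

def batches(pairs):
    """Mimic a tf.data pipeline with shuffle=True: every fresh
    iteration yields the batches in a new random order."""
    order = list(range(len(pairs)))
    random.shuffle(order)
    for i in order:
        yield pairs[i]

# 50 batches; every element of batch i carries the value i, so an
# image row and its label match only if they hold the same number
pairs = [(np.full((2, 4), i), np.full((2,), i)) for i in range(50)]

random.seed(0)
# two separate passes, as in the question: each pass reshuffles,
# so image row k and label row k come from different batches
xs = np.concatenate([x for x, y in batches(pairs)], axis=0)
ys = np.concatenate([y for x, y in batches(pairs)], axis=0)
misaligned = (xs[:, 0] != ys).mean()
print(misaligned)  # large fraction of rows mismatched

# a single pass keeps the pairing intact
imgs, labs = [], []
for x, y in batches(pairs):
    imgs.append(x)
    labs.append(y)
xs2 = np.concatenate(imgs, axis=0)
ys2 = np.concatenate(labs, axis=0)
print((xs2[:, 0] == ys2).all())
```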

CodePudding user response:

Try using a balanced dataset. I encountered this problem in my last project.
