Fire detection model incorrectly identifying all images as having no fire despite displaying 85% accuracy


I recently completed my first deep learning model, but when I tested it I found that it classifies every image as non-fire, even though it reported 85% accuracy during training. I am quite confused, because my dataset contains both fire and non-fire images and I dedicated 90% of it to training.

import glob
import numpy as np
import cv2
from sklearn.model_selection import train_test_split

# read the file paths of the dataset images
fire_images = glob.glob('1/*.jpg')

nonfire_images = glob.glob('0/*.jpg')

# create a preprocessing function for the images
def imageresize(path):
    img = cv2.imread(path)             # read
    img = cv2.resize(img, (196, 196))  # resize
    img = img / 255                    # scale to [0, 1]
    return img
# load the images and labels into lists (converted to np.array later)

X = []
y = []

for image in fire_images:
    X.append(imageresize(image))
    y.append(1)  # assign label 1 to fire_images 
for image in nonfire_images:
    X.append(imageresize(image))
    y.append(0)  # assign label 0 to nonfire_images 

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42,shuffle=True)


#importing keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten, Dropout
from tensorflow.keras.layers import GlobalMaxPooling2D, MaxPooling2D
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.models import Model

#Building model
model = Sequential()

model.add(Conv2D(128,(2,2),input_shape = (196,196,3),activation='relu'))
model.add(Conv2D(64,(2,2),activation='relu'))
model.add(MaxPooling2D())
model.add(Conv2D(32,(2,2),activation='relu'))
model.add(MaxPooling2D())

model.add(Flatten())
model.add(Dense(128))
model.add(Dense(1,activation= "sigmoid"))
#Model description
model.summary()

model.compile(optimizer='Adam', loss='binary_crossentropy', metrics=['accuracy'])
print(np.array(X_train).shape)
model.fit(np.array(X_train), np.array(y_train), validation_data=(np.array(X_test), np.array(y_test)),epochs=30, batch_size=32)

I got my dataset on Kaggle through this link:

https://www.kaggle.com/datasets/atulyakumar98/test-dataset?resource=download

CodePudding user response:

Well, 541 of the 651 images in that dataset are non-fire, so a model that always predicts 0 already scores over 80% accuracy.
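
You can check this directly from the labels you build in the loading loop (a quick sketch reusing the y list from your code):

import numpy as np

y_arr = np.array(y)                # labels from the loading loop above
n_fire = int((y_arr == 1).sum())
n_nonfire = int((y_arr == 0).sum())

# accuracy of a trivial model that always answers "no fire"
baseline = n_nonfire / len(y_arr)
print(f"fire: {n_fire}, non-fire: {n_nonfire}, majority-class baseline: {baseline:.2%}")

If your model's validation accuracy sits right at that baseline, it has almost certainly collapsed to the majority class.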

Most likely, adjusting the classification threshold would be enough for better decision making, but you should first decide which metric you actually want to optimize. If it is something intuitive like F1, draw a precision/recall curve and pick the threshold that maximizes it.
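
Here is a minimal sketch of that, assuming model, X_test and y_test are the objects from your code (sklearn's precision_recall_curve does the heavy lifting):

import numpy as np
from sklearn.metrics import precision_recall_curve

# predicted fire probabilities on the held-out split
probs = model.predict(np.array(X_test)).ravel()

precision, recall, thresholds = precision_recall_curve(np.array(y_test), probs)

# F1 for each candidate threshold (the last precision/recall pair has no threshold)
f1 = 2 * precision[:-1] * recall[:-1] / (precision[:-1] + recall[:-1] + 1e-8)
best = np.argmax(f1)
print(f"best threshold: {thresholds[best]:.3f}, F1: {f1[best]:.3f}")

# use the tuned threshold instead of the default 0.5
preds = (probs >= thresholds[best]).astype(int)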

Equalizing the class population in the train set is also not a bad idea, since you are probably going to expand it with image augmentations anyway.
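
One straightforward way to do that is to oversample the minority class; a rough sketch, assuming fire (label 1) is the minority class in your training split:

import numpy as np

X_train_arr = np.array(X_train)
y_train_arr = np.array(y_train)

fire_idx = np.where(y_train_arr == 1)[0]
nonfire_idx = np.where(y_train_arr == 0)[0]

# duplicate random fire samples until both classes are the same size
extra = np.random.choice(fire_idx, size=len(nonfire_idx) - len(fire_idx), replace=True)
idx = np.concatenate([nonfire_idx, fire_idx, extra])
np.random.shuffle(idx)

X_bal, y_bal = X_train_arr[idx], y_train_arr[idx]
model.fit(X_bal, y_bal,
          validation_data=(np.array(X_test), np.array(y_test)),
          epochs=30, batch_size=32)

Since the duplicates are exact copies, this works best when you also apply random augmentations (flips, crops, brightness shifts) to them, which is exactly the expansion mentioned above.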
