I have tried adding layers and dropout to improve my validation accuracy, but nothing changes: my training accuracy is above 95% while my validation accuracy is stuck at 88%.
My split:
x_train,x_validate,y_train,y_validate = train_test_split(x_train,y_train,test_size = 0.2,random_state = 42)
Shapes after splitting the data:
x_train shape: (5850,)
y_train shape: (5850,)
x_validate shape: (1463,)
y_validate shape: (1463,)
x_test shape: (2441,)
y_test shape: (2441,)
Width, height and number of channels:
width, height, channels = 64, 64, 3
Shapes after converting the images to arrays:
Training set shape : (5850, 64, 64, 3)
Validation set shape : (1463, 64, 64, 3)
Test set shape : (2441, 64, 64, 3)
I have 6 classes.
Augmentation:
datagen = ImageDataGenerator(
    featurewise_center=True,
    samplewise_center=True,
    featurewise_std_normalization=True,
    samplewise_std_normalization=True,
    zca_whitening=False,
    rotation_range=0.9,
    zoom_range=0.7,
    width_shift_range=0.8,
    height_shift_range=0.8,
    horizontal_flip=True,
    vertical_flip=True)
datagen.fit(x_train)
My Sequential model:
model = Sequential()
model.add(Conv2D(16,(3,3),input_shape = (224,224,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D())
model.add(Conv2D(32,(3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D())
model.add(Conv2D(64,(3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D())
model.add(Conv2D(128,(3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D())
model.add(Conv2D(256,(3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D())
model.add(Flatten())
model.add(Dense(256))
model.add(Activation("relu"))
model.add(Dropout(0.3))
model.add(Dense(256))
model.add(Activation("relu"))
model.add(Dropout(0.2))
model.add(Dense(numberOfClass)) # output
model.add(Activation("softmax"))
model.compile(loss="binary_crossentropy",
              optimizer="adam",
              metrics=["accuracy"])
batch_size = 256
I use early stopping in my code to keep the weights with the lowest validation loss. How can I improve my validation accuracy to at least 92%?
Epoch 1/100
29/29 [==============================] - 62s 2s/step - loss: 0.4635 - accuracy: 0.3040 - val_loss: 0.4227 - val_accuracy: 0.4007
Epoch 00001: val_loss improved from inf to 0.42266, saving model to ./model_best_weights.h5
Epoch 2/100
29/29 [==============================] - 60s 2s/step - loss: 0.4230 - accuracy: 0.3260 - val_loss: 0.4046 - val_accuracy: 0.3314
Epoch 00002: val_loss improved from 0.42266 to 0.40463, saving model to ./model_best_weights.h5
Epoch 3/100
29/29 [==============================] - 60s 2s/step - loss: 0.3833 - accuracy: 0.4234 - val_loss: 0.3417 - val_accuracy: 0.5125
Epoch 00003: val_loss improved from 0.40463 to 0.34174, saving model to ./model_best_weights.h5
Epoch 4/100
29/29 [==============================] - 60s 2s/step - loss: 0.3351 - accuracy: 0.5040 - val_loss: 0.3108 - val_accuracy: 0.5432
Epoch 00004: val_loss improved from 0.34174 to 0.31083, saving model to ./model_best_weights.h5
Epoch 5/100
29/29 [==============================] - 59s 2s/step - loss: 0.3002 - accuracy: 0.5683 - val_loss: 0.2655 - val_accuracy: 0.6247
Epoch 00005: val_loss improved from 0.31083 to 0.26553, saving model to ./model_best_weights.h5
Epoch 6/100
29/29 [==============================] - 60s 2s/step - loss: 0.2794 - accuracy: 0.6025 - val_loss: 0.2677 - val_accuracy: 0.6194
Epoch 00006: val_loss did not improve from 0.26553
Epoch 7/100
29/29 [==============================] - 60s 2s/step - loss: 0.2606 - accuracy: 0.6374 - val_loss: 0.2524 - val_accuracy: 0.6477
Epoch 00007: val_loss improved from 0.26553 to 0.25239, saving model to ./model_best_weights.h5
Epoch 8/100
29/29 [==============================] - 59s 2s/step - loss: 0.2400 - accuracy: 0.6751 - val_loss: 0.2232 - val_accuracy: 0.6997
Epoch 00008: val_loss improved from 0.25239 to 0.22320, saving model to ./model_best_weights.h5
Epoch 9/100
29/29 [==============================] - 60s 2s/step - loss: 0.2307 - accuracy: 0.6875 - val_loss: 0.2092 - val_accuracy: 0.7181
Epoch 00009: val_loss improved from 0.22320 to 0.20916, saving model to ./model_best_weights.h5
Epoch 10/100
29/29 [==============================] - 59s 2s/step - loss: 0.2085 - accuracy: 0.7284 - val_loss: 0.2092 - val_accuracy: 0.7255
Epoch 00010: val_loss did not improve from 0.20916
Epoch 11/100
29/29 [==============================] - 60s 2s/step - loss: 0.1961 - accuracy: 0.7463 - val_loss: 0.1943 - val_accuracy: 0.7603
Epoch 00011: val_loss improved from 0.20916 to 0.19435, saving model to ./model_best_weights.h5
Epoch 12/100
29/29 [==============================] - 60s 2s/step - loss: 0.1894 - accuracy: 0.7621 - val_loss: 0.1829 - val_accuracy: 0.7669
Epoch 00012: val_loss improved from 0.19435 to 0.18294, saving model to ./model_best_weights.h5
Epoch 13/100
29/29 [==============================] - 60s 2s/step - loss: 0.1766 - accuracy: 0.7770 - val_loss: 0.1751 - val_accuracy: 0.7780
Epoch 00013: val_loss improved from 0.18294 to 0.17508, saving model to ./model_best_weights.h5
Epoch 14/100
29/29 [==============================] - 60s 2s/step - loss: 0.1606 - accuracy: 0.8006 - val_loss: 0.1666 - val_accuracy: 0.8005
Epoch 00014: val_loss improved from 0.17508 to 0.16662, saving model to ./model_best_weights.h5
Epoch 15/100
29/29 [==============================] - 60s 2s/step - loss: 0.1531 - accuracy: 0.8105 - val_loss: 0.1718 - val_accuracy: 0.7816
Epoch 00015: val_loss did not improve from 0.16662
Epoch 16/100
29/29 [==============================] - 61s 2s/step - loss: 0.1449 - accuracy: 0.8265 - val_loss: 0.1600 - val_accuracy: 0.8083
Epoch 00016: val_loss improved from 0.16662 to 0.16000, saving model to ./model_best_weights.h5
Epoch 17/100
29/29 [==============================] - 62s 2s/step - loss: 0.1309 - accuracy: 0.8419 - val_loss: 0.1609 - val_accuracy: 0.8202
Epoch 00017: val_loss did not improve from 0.16000
Epoch 18/100
29/29 [==============================] - 60s 2s/step - loss: 0.1165 - accuracy: 0.8607 - val_loss: 0.1572 - val_accuracy: 0.8222
Epoch 00018: val_loss improved from 0.16000 to 0.15722, saving model to ./model_best_weights.h5
Epoch 19/100
29/29 [==============================] - 60s 2s/step - loss: 0.1109 - accuracy: 0.8711 - val_loss: 0.1523 - val_accuracy: 0.8370
Epoch 00019: val_loss improved from 0.15722 to 0.15225, saving model to ./model_best_weights.h5
Epoch 20/100
29/29 [==============================] - 60s 2s/step - loss: 0.1008 - accuracy: 0.8877 - val_loss: 0.1405 - val_accuracy: 0.8484
Epoch 00020: val_loss improved from 0.15225 to 0.14046, saving model to ./model_best_weights.h5
Epoch 21/100
29/29 [==============================] - 60s 2s/step - loss: 0.1063 - accuracy: 0.8764 - val_loss: 0.1514 - val_accuracy: 0.8390
Epoch 00021: val_loss did not improve from 0.14046
Epoch 22/100
29/29 [==============================] - 61s 2s/step - loss: 0.0880 - accuracy: 0.8979 - val_loss: 0.1423 - val_accuracy: 0.8550
Epoch 00022: val_loss did not improve from 0.14046
Epoch 23/100
29/29 [==============================] - 60s 2s/step - loss: 0.0750 - accuracy: 0.9196 - val_loss: 0.1368 - val_accuracy: 0.8632
Epoch 00023: val_loss improved from 0.14046 to 0.13678, saving model to ./model_best_weights.h5
Epoch 24/100
29/29 [==============================] - 60s 2s/step - loss: 0.0712 - accuracy: 0.9218 - val_loss: 0.1520 - val_accuracy: 0.8521
Epoch 00024: val_loss did not improve from 0.13678
Epoch 25/100
29/29 [==============================] - 60s 2s/step - loss: 0.0664 - accuracy: 0.9288 - val_loss: 0.1600 - val_accuracy: 0.8451
Epoch 00025: val_loss did not improve from 0.13678
Epoch 26/100
29/29 [==============================] - 60s 2s/step - loss: 0.0605 - accuracy: 0.9360 - val_loss: 0.1528 - val_accuracy: 0.8636
Epoch 00026: val_loss did not improve from 0.13678
Epoch 00026: early stopping
Images of my graphs:
https://i.imgur.com/pNYwcE8.jpg
https://i.imgur.com/ZCSRI8e.jpg
CodePudding user response:
You should experiment more, but glancing at your code, I can give you the following tips:
- according to the plot, validation accuracy is still increasing slightly at the end; you could increase the EarlyStopping patience and monitor validation accuracy instead of validation loss
- add batch normalization to your architecture
- increase the dropout rate, perhaps to a value between 0.4 and 0.7
- tune the learning rate, and consider a learning rate scheduler such as ReduceLROnPlateau, which may help training continue after validation metrics stop improving
Good luck!