Accuracy and val_accuracy don't change while training


I tried to train my convolutional neural network using the TensorFlow and Keras libraries, but the accuracy and val_accuracy values didn't change at all during training. Here is my neural network code:

from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D
import pickle

# Load the preprocessed features and labels (built by the script below)
X = pickle.load(open("X.pickle", "rb"))
y = pickle.load(open("y.pickle", "rb"))

# Scale pixel values from [0, 255] to [0, 1]
X = X/255.0

model = Sequential()
model.add(Conv2D(64, (3, 3), input_shape=X.shape[1:]))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3), activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(64, activation="relu"))

# Single sigmoid unit for binary classification
model.add(Dense(1, activation="sigmoid"))

model.compile(loss="binary_crossentropy",
              optimizer="adam",
              metrics=["accuracy"])
model.fit(X, y, batch_size=10, epochs=10, validation_split=0.1)

Here is the code that creates the training data, i.e. the features and labels (X: features, y: labels):


import os
import random

import cv2
import numpy as np

# DATADIR, CATEGORIES, IMG_SIZE and the training_data list are assumed to be
# defined earlier in the script.
def create_training_data():
    for category in CATEGORIES:
        path = os.path.join(DATADIR, category)
        class_num = CATEGORIES.index(category)  # label = index of the category
        for img in os.listdir(path):
            try:
                # Read each image as grayscale and resize it to a fixed size
                img_array = cv2.imread(os.path.join(path, img), cv2.IMREAD_GRAYSCALE)
                new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
                training_data.append([new_array, class_num])
            except Exception:
                pass  # skip images that fail to load or resize

create_training_data()

# Shuffle so that both classes are mixed before the validation split
random.shuffle(training_data)

X = []
y = []
for features, label in training_data:
    X.append(features)
    y.append(label)

# Add the single grayscale channel dimension expected by Conv2D
X = np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
y = np.array(y)

And this is the training log:

2023-01-15 00:36:42.368335: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Epoch 1/10
70/70 [==============================] - 45s 619ms/step - loss: 0.3039 - accuracy: 0.9627 - val_loss: 0.1211 - val_accuracy: 0.9744
Epoch 2/10
70/70 [==============================] - 42s 600ms/step - loss: 0.1524 - accuracy: 0.9670 - val_loss: 0.1189 - val_accuracy: 0.9744
Epoch 3/10
70/70 [==============================] - 42s 600ms/step - loss: 0.1537 - accuracy: 0.9670 - val_loss: 0.1622 - val_accuracy: 0.9744
Epoch 4/10
70/70 [==============================] - 44s 627ms/step - loss: 0.1563 - accuracy: 0.9670 - val_loss: 0.1464 - val_accuracy: 0.9744
Epoch 5/10
70/70 [==============================] - 42s 604ms/step - loss: 0.1591 - accuracy: 0.9670 - val_loss: 0.1185 - val_accuracy: 0.9744
Epoch 6/10
70/70 [==============================] - 42s 605ms/step - loss: 0.1511 - accuracy: 0.9670 - val_loss: 0.1338 - val_accuracy: 0.9744
Epoch 7/10
70/70 [==============================] - 49s 698ms/step - loss: 0.1623 - accuracy: 0.9670 - val_loss: 0.1188 - val_accuracy: 0.9744
Epoch 8/10
70/70 [==============================] - 50s 709ms/step - loss: 0.1480 - accuracy: 0.9670 - val_loss: 0.1397 - val_accuracy: 0.9744
Epoch 9/10
70/70 [==============================] - 45s 637ms/step - loss: 0.1508 - accuracy: 0.9670 - val_loss: 0.1203 - val_accuracy: 0.9744
Epoch 10/10
70/70 [==============================] - 47s 665ms/step - loss: 0.1716 - accuracy: 0.9670 - val_loss: 0.1238 - val_accuracy: 0.9744

Process finished with exit code 0

What should I do to fix this problem?

CodePudding user response:

Hmm, accuracy and validation accuracy are high even on the first epoch. Try using a lower learning rate in the Adam optimizer, say 0.0002. On the first epoch, pay attention to the loss and validation loss as the batches are processed: the loss should start high and gradually decrease over the course of the epoch.
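For reference, a minimal sketch of that change, assuming the model from the question; it replaces the string optimizer="adam" (which uses Adam's default learning rate of 0.001) with an explicit Adam instance:

from keras.optimizers import Adam

model.compile(loss="binary_crossentropy",
              optimizer=Adam(learning_rate=0.0002),  # default is 0.001
              metrics=["accuracy"])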

CodePudding user response:

There are a couple of potential reasons why you are seeing this:

  1. Your dataset is far too small. If your validation set is tiny, there is a high probability that your model will get the same percentage of predictions correct/incorrect every epoch. From the log, 70 steps per epoch at a batch size of 10 is roughly 700 training samples, so the 10% validation split likely holds only about 78 images; note that 76/78 ≈ 0.9744, exactly the value that repeats in the log.

  2. There is a great imbalance in your dataset. If one class heavily outweighs the other, your model will favor the majority class and predict it no matter what, since that is what maximizes its accuracy. The training accuracy being pinned at 0.9670 from epoch 2 onward is consistent with this; a quick way to check the class counts is sketched below.

From what I can see there is nothing wrong with your code; rather, it is the dataset itself that needs modification.
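As a starting point, here is a minimal sketch (assuming the X, y and model objects from the question) that prints the class distribution and, if it turns out to be skewed, compensates via the class_weight argument to fit():

import numpy as np

# Count how many samples belong to each class label
labels, counts = np.unique(y, return_counts=True)
for label, count in zip(labels, counts):
    print(f"class {label}: {count} samples ({count / len(y):.1%})")

# One common mitigation for imbalance: weight each class inversely to its
# frequency, so the minority class contributes more to the loss.
total = counts.sum()
class_weight = {int(label): total / (len(labels) * count)
                for label, count in zip(labels, counts)}
model.fit(X, y, batch_size=10, epochs=10, validation_split=0.1,
          class_weight=class_weight)

Collecting more images for the minority class (or downsampling the majority class) is usually the more robust fix, but class weighting is a quick way to confirm that imbalance is the problem.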
