Keras Model early stops even though min_delta condition is not achieved


I am training a Keras Sequential model on a 5-class subset of the MNIST dataset. The input is the flattened 28x28 image (784 values), and the output is a one-hot encoding of the class it belongs to.

model = keras.Sequential([
    keras.layers.InputLayer(input_shape=(784,)),
    keras.layers.Dense(32, activation='relu'),
    keras.layers.Dense(15, activation='relu'),
    keras.layers.Dense(3, activation='relu'),
    keras.layers.Dense(5, activation='softmax')
])

optzr = keras.optimizers.SGD(learning_rate=0.001, momentum=0.0, nesterov=False)
es = keras.callbacks.EarlyStopping(monitor='loss', min_delta=0.0001, verbose=2)
model.compile(optimizer=optzr, loss='categorical_crossentropy', metrics=['accuracy'])
out = model.fit(xtrain, ytrain, validation_data=(xval, yval), batch_size=32, verbose=2, epochs=20, callbacks=[es])

On running the model, this is what the output is

Epoch 1/20
356/356 - 2s - loss: 1.7157 - accuracy: 0.1894 - val_loss: 1.6104 - val_accuracy: 0.1997 - 2s/epoch - 5ms/step
Epoch 2/20
356/356 - 1s - loss: 1.6094 - accuracy: 0.1946 - val_loss: 1.6102 - val_accuracy: 0.1997 - 1s/epoch - 3ms/step
Epoch 00002: early stopping

Here, even though the loss decreased by more than 0.1 between epochs (well above min_delta), the callback reports that the early stopping condition has been met and halts training.

CodePudding user response:

You should set patience to 1 in the callback definition; if you don't, it defaults to 0.

es = keras.callbacks.EarlyStopping(monitor='loss', min_delta=1e-4, verbose=2, patience=1)

CodePudding user response:

Keras implements EarlyStopping by keeping an internal counter named wait. The counter increases by one for each epoch in which the monitored quantity does not improve by more than min_delta, and resets to 0 otherwise. Training then stops when wait is greater than or equal to patience:

# Only check after the first epoch.
if self.wait >= self.patience and epoch > 0:
  self.stopped_epoch = epoch
  self.model.stop_training = True

Since patience defaults to 0, self.wait >= self.patience is always true once the first epoch has passed (epoch > 0), regardless of whether the loss improved.

To stop only once performance actually stops improving, you want patience set to 1, not 0.
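The wait/patience bookkeeping described above can be sketched in plain Python. This is a simplified re-implementation for illustration, not the actual Keras source (the real callback also handles modes, baselines, and weight restoration), but it reproduces why patience=0 stops after epoch 2 in the question while patience=1 waits for a genuine plateau:

```python
def epochs_run(losses, patience, min_delta=1e-4):
    """Return the number of epochs completed before early stopping fires.

    Simplified sketch of EarlyStopping's wait/patience logic for a
    minimized metric such as loss.
    """
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(losses):
        if best - loss > min_delta:  # improved by more than min_delta
            best = loss
            wait = 0
        else:
            wait += 1
        # Only check after the first epoch, mirroring the snippet above.
        if wait >= patience and epoch > 0:
            return epoch + 1  # stopped here
    return len(losses)  # ran to completion

# With patience=0 (the default), training stops after epoch 2 even
# though the loss improved, matching the behavior in the question.
print(epochs_run([1.7157, 1.6094], patience=0))  # 2

# With patience=1, training continues while the loss keeps improving
# and stops one epoch after it plateaus.
print(epochs_run([1.7, 1.6, 1.5, 1.4999, 1.4998], patience=1))  # 4
```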
