model.fit() never calls custom metric specified

Following the documentation, I'm trying to implement a custom metric, but the metric never gets called. I added sanity checks that should raise errors if the metric is ever called. You can find the full example in this notebook.

import tensorflow as tf

def my_metric_fn(y_true, y_pred):
    # Sanity checks: either of these should raise or hang
    # if this function is ever actually executed.
    1 / 0
    while True:
        pass
    squared_difference = tf.square(y_true - y_pred)
    return tf.reduce_mean(squared_difference, axis=-1)

However, the code runs without a problem after passing the metric to model.compile():

model.compile(optimizer=opt, metrics=[my_metric_fn])
history = model.fit(
    train_dataset,
    validation_data=validation_dataset,
    epochs=epochs,
    callbacks=[early_stopping],
)

What I actually get:

Epoch 1/100
59/59 [==============================] - 27s 451ms/step - loss: 16.3928 - my_metric_fn: 0.0000e+00 - val_loss: 16.5252 - val_my_metric_fn: 0.0000e+00
Epoch 2/100
59/59 [==============================] - 25s 420ms/step - loss: 16.3508 - my_metric_fn: 0.0000e+00 - val_loss: 16.5316 - val_my_metric_fn: 0.0000e+00
Epoch 3/100
59/59 [==============================] - 25s 420ms/step - loss: 16.3420 - my_metric_fn: 0.0000e+00 - val_loss: 16.5372 - val_my_metric_fn: 0.0000e+00
Epoch 4/100
59/59 [==============================] - 25s 417ms/step - loss: 16.3365 - my_metric_fn: 0.0000e+00 - val_loss: 16.5287 - val_my_metric_fn: 0.0000e+00
Epoch 5/100
59/59 [==============================] - 25s 418ms/step - loss: 16.3251 - my_metric_fn: 0.0000e+00 - val_loss: 16.5271 - val_my_metric_fn: 0.0000e+00
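
For comparison, the same metric does get called in a minimal standalone setup with explicit (x, y) targets (a toy sketch with an assumed Sequential model and random data, not my actual pipeline):

import numpy as np
import tensorflow as tf
from tensorflow import keras

def my_metric_fn(y_true, y_pred):
    squared_difference = tf.square(y_true - y_pred)
    return tf.reduce_mean(squared_difference, axis=-1)

# Toy regression model with explicit targets; here the metric runs
# and reports nonzero values.
model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse", metrics=[my_metric_fn])
model.fit(np.random.rand(8, 4).astype("float32"),
          np.random.rand(8, 1).astype("float32"),
          epochs=1)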

CodePudding user response:

I think you are running into this bug, which shows up when metrics and add_loss are used together. Maybe try explicitly adding your metric inside your custom layer:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def my_metric_fn(y_true, y_pred):
    # Demo metric: the mean label value. In this CTC setup, y_true and
    # y_pred have different shapes, so a squared difference between
    # them would not be meaningful here.
    return tf.reduce_mean(y_true, axis=-1)

class CTCLayer(layers.Layer):
    def __init__(self, name=None):
        super().__init__(name=name)
        self.loss_fn = keras.backend.ctc_batch_cost

    def call(self, y_true, y_pred):
        # Compute the training-time loss value and add it
        # to the layer using `self.add_loss()`.
        batch_len = tf.cast(tf.shape(y_true)[0], dtype="int64")
        input_length = tf.cast(tf.shape(y_pred)[1], dtype="int64")
        label_length = tf.cast(tf.shape(y_true)[1], dtype="int64")

        input_length = input_length * tf.ones(shape=(batch_len, 1), dtype="int64")
        label_length = label_length * tf.ones(shape=(batch_len, 1), dtype="int64")

        loss = self.loss_fn(y_true, y_pred, input_length, label_length)
        self.add_loss(loss)
        self.add_metric(my_metric_fn(y_true, y_pred), name="my_metric_fn")

        # At test time, just return the computed predictions
        return y_pred

With the metric registered inside the layer, it now reports nonzero values:

Epoch 1/100
15/59 [======>.......................] - ETA: 17s - loss: 34.6694 - my_metric_fn: 10.3142
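
For context, here is a minimal sketch of how such a layer is typically wired into a functional model (the input names, shapes, vocabulary size, and softmax head are assumptions, modeled on the Keras captcha OCR example):

from tensorflow import keras
from tensorflow.keras import layers

num_chars = 20  # assumed vocabulary size

labels = layers.Input(name="label", shape=(None,), dtype="float32")
images = layers.Input(name="image", shape=(100, 32), dtype="float32")

x = layers.Dense(64, activation="relu")(images)
y_pred = layers.Dense(num_chars + 1, activation="softmax")(x)

# The layer consumes labels and predictions, registers the loss
# (and now the metric) itself, and passes the predictions through.
output = CTCLayer(name="ctc_loss")(labels, y_pred)

model = keras.Model(inputs=[images, labels], outputs=output)
model.compile(optimizer=keras.optimizers.Adam())  # loss comes from add_loss()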

Or add it at the end of your model:

opt = keras.optimizers.Adam()
# Register the metric on the model, then compile and return it
model.add_metric(my_metric_fn(...), name="my_metric_fn")
model.compile(optimizer=opt)
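
A concrete version of that call, reusing the assumed labels and y_pred tensors from the sketch above (model.add_metric accepts a symbolic tensor from the functional graph):

# Hedged variant: register the metric on the model itself rather
# than inside the layer, before compiling.
output = CTCLayer(name="ctc_loss")(labels, y_pred)
model = keras.Model(inputs=[images, labels], outputs=output)
model.add_metric(my_metric_fn(labels, y_pred), name="my_metric_fn")
model.compile(optimizer=keras.optimizers.Adam())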