How to debug a custom loss function during model fitting?


I would like to see what is happening in my loss function during model fitting.

However, I cannot figure out how to do that.

This is what I am trying but it does not work:

def custom_loss(label : tf.Tensor, pred : tf.Tensor) -> tf.Tensor:
    mask = label != 0
    loss_object = tf.keras.losses.BinaryCrossentropy(from_logits=True, reduction='none')
    loss = loss_object(label, pred)
    mask = tf.cast(mask, dtype=loss.dtype)
    tf.print("\n---------------------------")
    tf.print("custom_loss - str(loss):", str(loss))
    tf.print("custom_loss - str(mask):", str(mask))
    try:
        tf.print("tf.keras.backend.eval(loss):", tf.keras.backend.eval(loss))
    except:
        tf.print("tf.keras.backend.eval(loss) does not work - exception!")
    loss = tf.reshape(loss, shape=(batch_size, loss.shape[1], 1))  # batch_size is defined elsewhere in my script
    loss *= mask

    loss = tf.reduce_sum(loss)/tf.reduce_sum(mask)
    tf.print("\n============================")
    return loss

After starting training by calling the fit() function, I get only the following output:

  2/277 [..............................] - ETA: 44s - loss: 0.6931 - masked_accuracy: 0.0000e+00
---------------------------
custom_loss - str(loss): Tensor("custom_loss/binary_crossentropy/weighted_loss/Mul:0", shape=(None, 20), dtype=float32)
custom_loss - str(mask): Tensor("custom_loss/Cast:0", shape=(None, 20, 1), dtype=float32)
tf.keras.backend.eval(loss) does not work - exception!

How do I display the actual value of label, pred, mask and loss?

CodePudding user response:

In TF 2 Keras, this can be done by training the model in eager mode, i.e. passing run_eagerly=True to the model.compile method before calling model.fit. From the docs:

run_eagerly: Bool. Defaults to False. If True, this Model's logic will not be wrapped in a tf.function.
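
The answer enables eager mode per model; as a general TF 2 alternative (not something from the answer above), you can also flip it globally with tf.config.run_functions_eagerly:

import tensorflow as tf

# Make every tf.function (including Keras' compiled train_step) run
# eagerly, so plain Python print() inside the loss shows real values.
tf.config.run_functions_eagerly(True)

# ... build, compile and fit as usual ...

tf.config.run_functions_eagerly(False)  # restore graph mode when done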


Now, the end-to-end solution can be achieved in several ways, e.g. with the straightforward model.fit or by customizing the fit method. Here are some pointers.

import numpy as np
import tensorflow as tf
from tensorflow import keras

loss_object = keras.losses.BinaryCrossentropy(
    from_logits=True,
    reduction='none'
)

def custom_loss(label : tf.Tensor, pred : tf.Tensor) -> tf.Tensor:
    mask = label != 1  # adapt the masking rule to your data (the question masks with label != 0)
    loss = loss_object(label, pred)
    mask = tf.cast(mask, dtype=loss.dtype)
    
    if tf.executing_eagerly():
        print("custom_loss - str(loss): \n", str(loss))
        print("custom_loss - str(mask): \n", str(mask), '\n'*2)
    
    loss = tf.reshape(loss, shape=(tf.shape(loss)[0], -1))
    loss *= mask
    loss = tf.reduce_sum(loss) / tf.reduce_sum(mask)
    return loss
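
One more detail worth flagging from the question's attempt: tf.print does print real tensor values even in graph mode, but only when the tensors are passed directly; wrapping them in str() captures the symbolic Tensor repr at trace time, which is exactly the Tensor("custom_loss/...") output shown above. A sketch of the same loss with graph-compatible printing (custom_loss_tf_print is my name for illustration, not from the answer):

def custom_loss_tf_print(label: tf.Tensor, pred: tf.Tensor) -> tf.Tensor:
    loss = loss_object(label, pred)
    mask = tf.cast(label != 1, dtype=loss.dtype)
    loss = tf.reshape(loss, shape=(tf.shape(loss)[0], -1))
    # Pass tensors directly; str(loss) would freeze the symbolic repr
    # at trace time instead of printing run-time values.
    tf.print("loss values:", loss)
    tf.print("mask values:", mask)
    loss *= mask
    return tf.reduce_sum(loss) / tf.reduce_sum(mask)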

With vanilla model.fit:

# Construct an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1, activation=None)(inputs)
model = keras.Model(inputs, outputs)

# Pass the custom loss and enable run_eagerly so the debug prints execute.
model.compile(optimizer="adam", loss=custom_loss, run_eagerly=True)

# Just use `fit` as usual -- you can use callbacks, etc.
x = tf.random.normal([10, 32], 0, 1, tf.float32)
y = np.random.randint(2, size=(10, 1))

model.fit(x, y, epochs=5)
custom_loss - str(loss): 
 tf.Tensor(
[0.3215071  0.6470841  3.401876   1.6478868  0.4492059  0.67835623
 0.1574089  1.3314284  1.9282155  0.5588544 ], shape=(10,), dtype=float32)
custom_loss - str(mask): 
 tf.Tensor(
[[0.]
 [0.]
 [1.]
 [1.]
 [1.]
 [0.]
 [0.]
 [0.]
 [0.]
 [0.]], shape=(10, 1), dtype=float32) 


1/1 [==============================] - 0s 20ms/step - loss: 1.8330
<keras.callbacks.History at 0x7f4332ef4d10>
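
Since custom_loss is just a Python function, you can also sanity-check it eagerly on made-up tensors before wiring it into fit (the values below are hypothetical):

dummy_label = tf.constant([[0.], [1.], [0.], [1.]])      # hypothetical labels
dummy_pred = tf.constant([[0.2], [-1.3], [2.0], [0.5]])  # hypothetical logits
print(custom_loss(dummy_label, dummy_pred))  # eager call, prints a real scalar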

Or, with a custom model.fit (overriding train_step), the output would be the same as above.

class CustomModel(keras.Model):
    def train_step(self, data):
        x, y = data

        # Notice: instead of printing values in the custom loss,
        # I can do the same here.
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute our own loss
            loss = custom_loss(y, y_pred)
        
        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)

        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        return {"loss": loss}


# Construct an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1, activation=None)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer="adam", run_eagerly=True)
model.fit(x, y)

And lastly, if you want to go more low-level, you can write a custom training loop; the Keras guides on customizing what happens in fit() and on writing a training loop from scratch are pretty resourceful.
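
For completeness, a minimal custom-loop sketch (reusing the x, y, model and custom_loss from the examples above; this loop is my illustration, not part of the original answer). Eager execution is the default here, so plain print() shows real values:

optimizer = keras.optimizers.Adam()
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(5)

for epoch in range(2):
    for step, (x_batch, y_batch) in enumerate(dataset):
        with tf.GradientTape() as tape:
            y_pred = model(x_batch, training=True)  # forward pass
            loss = custom_loss(y_batch, y_pred)     # debug prints fire here
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        # Outside tf.function everything is eager, so .numpy() is available.
        # (A real loss should also guard against an all-zero mask.)
        print(f"epoch {epoch} step {step} loss {loss.numpy():.4f}")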
