How to modify a variable inside the loss function in each epoch during training?


I have a custom loss function. In each epoch I would like to either keep or throw away my input matrix randomly:

import random
import tensorflow as tf
from tensorflow.python.keras import backend

def decision(probability):
    return random.random() < probability

def my_throw_loss_in1(y_true, y_pred):
    if decision(probability=0.5):
        keep_mask = tf.ones_like(in1)  # in1 is the input matrix
        total_loss = backend.mean(backend.square(y_true - y_pred)) * keep_mask
        print('Input1 is kept')
    else:
        throw_mask = tf.zeros_like(in1)
        total_loss = backend.mean(backend.square(y_true - y_pred)) * throw_mask
        print('Input1 is thrown away')
    return total_loss


model.compile(loss=[my_throw_loss_in1],
              optimizer='Adam',
              metrics=['mae'])

history2 = model.fit([x, y], batch_size=10, epochs=150, validation_split=0.2, shuffle=True)

but this only sets the decision value once, when the loss function is traced, and does not re-evaluate it in each epoch. How do I write a loss function whose variable can be modified in each epoch?

Here some thoughts:

  1. My first guess is to write a callback that passes an argument to the loss function, but I have not succeeded so far. Basically, it is not clear to me: when I return a value from a callback, how do I pass that value on to the loss function?

OR

  2. The other way around would be to write the loss function inside a callback, but then what do I pass to the callback as an argument, and how do I compile a model with a loss function defined in a callback?

The loss function is based on this post.

CodePudding user response:

Just change your loss function as follows so that the random decision is re-evaluated every time the loss is computed during fit():

def my_throw_loss_in1(y_true, y_pred):
    probability = 0.5
    # Draw a fresh uniform sample as a graph op, so it is re-executed each step
    random_uniform = tf.random.uniform(shape=[], minval=0., maxval=1., dtype=tf.float32)
    condition = tf.less(random_uniform, probability)
    mask = tf.cond(condition, lambda: tf.ones_like(y_true), lambda: tf.zeros_like(y_true))
    total_loss = tf.keras.backend.mean(tf.keras.backend.square(y_true - y_pred) * mask)
    tf.print(mask)
    return total_loss
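A quick way to convince yourself that the mask is re-drawn on every call, rather than fixed once at compile time, is to invoke the loss directly on dummy tensors (an illustrative check, not part of the original answer):

```python
import tensorflow as tf

def my_throw_loss_in1(y_true, y_pred):
    probability = 0.5
    # The uniform sample is a graph op, re-executed on every invocation
    random_uniform = tf.random.uniform(shape=[], minval=0., maxval=1., dtype=tf.float32)
    condition = tf.less(random_uniform, probability)
    mask = tf.cond(condition, lambda: tf.ones_like(y_true), lambda: tf.zeros_like(y_true))
    return tf.keras.backend.mean(tf.keras.backend.square(y_true - y_pred) * mask)

y_true = tf.ones((4, 1))
y_pred = tf.zeros((4, 1))
# Each call re-samples the decision, so the loss is either the full MSE (1.0 here) or 0.0
losses = [float(my_throw_loss_in1(y_true, y_pred)) for _ in range(20)]
print(set(losses))
```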

First, a random number is generated, and a condition (random number less than the probability you defined) is built from it. tf.cond then returns tf.ones_like when the condition is True and tf.zeros_like otherwise. Finally, the mask is simply applied to your loss. Because these are TensorFlow graph ops rather than Python control flow, they are re-executed on every training step instead of being baked in once at trace time.
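Note that the tf.cond version re-draws the mask on every batch, not once per epoch as originally asked. If the decision should hold for a whole epoch, one option is a non-trainable tf.Variable read inside the loss and reassigned from a callback's on_epoch_begin. This is a sketch of that idea; the names keep_input and FlipDecision are illustrative, not from the original post:

```python
import numpy as np
import tensorflow as tf

# Non-trainable variable holding the current keep/throw decision (1.0 or 0.0)
keep_input = tf.Variable(1.0, trainable=False, dtype=tf.float32)

def my_throw_loss_in1(y_true, y_pred):
    # Reading a tf.Variable inside the loss picks up its current value at
    # every step, so the callback's assignment takes effect immediately
    return tf.keras.backend.mean(tf.keras.backend.square(y_true - y_pred)) * keep_input

class FlipDecision(tf.keras.callbacks.Callback):
    def __init__(self, probability=0.5):
        super().__init__()
        self.probability = probability

    def on_epoch_begin(self, epoch, logs=None):
        # One decision per epoch, applied to every batch in that epoch
        keep_input.assign(1.0 if np.random.random() < self.probability else 0.0)

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(loss=my_throw_loss_in1, optimizer='adam', metrics=['mae'])

x = np.random.random((20, 4)).astype('float32')
y = np.random.random((20, 1)).astype('float32')
history = model.fit(x, y, batch_size=10, epochs=3,
                    callbacks=[FlipDecision(0.5)], verbose=0)
```

This also answers the asker's first thought: the callback does not return a value to the loss function; instead, both share a tf.Variable that the callback mutates.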
