Sample-dependent parameters in a custom loss function

I have an autoencoder written with tf.keras that works on 2D images. To train the autoencoder I use a custom loss function. To improve the loss function I would like to add two parameters related to the training samples. These parameters, however, are different for each sample, so my data look like this (a rough sketch in code follows the list):

  • Image_1, (a_1, b_1)
  • Image_2, (a_2, b_2)
  • ...
  • Image_n, (a_n, b_n)
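
For concreteness, a rough sketch of that layout (the shapes and counts here are made up):

import numpy as np

images = np.random.rand(1000, 64, 64, 1).astype("float32")  # n 2D images
a = np.random.rand(1000).astype("float32")                   # per-sample parameter a_i
b = np.random.rand(1000).astype("float32")                   # per-sample parameter b_i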

Is there a trick to pass these parameters to the custom loss function? I tried using two inputs with one output, but I have no idea how to refer to the image and the parameters inside the loss.

Thank you in advance.

CodePudding user response:

If your dataset consists of samples Image_1, (a_1, b_1), and so on, you can use a custom training loop, which gives you all the flexibility you need. Here is an example with an arbitrary custom loss function and dataset, since I do not know the details of your project:

import tensorflow as tf
import pathlib

dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True)
data_dir = pathlib.Path(data_dir)
batch_size = 32
train_ds = tf.keras.utils.image_dataset_from_directory(
  data_dir,
  image_size=(28, 28),
  batch_size=batch_size)

normalization_layer = tf.keras.layers.Rescaling(1./255)

def change_inputs(images, _):
  # Drop the class labels and use the normalized image itself as the reconstruction target.
  x = tf.image.resize(normalization_layer(images), [28, 28], method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
  return x, x

def custom_loss(x, x_hat, a, b):
  # Arbitrary example loss: reconstruction error scaled by a term built from the per-sample parameters.
  return tf.reduce_mean(tf.math.squared_difference(x, x_hat)) * tf.reduce_mean(a - b)

a = tf.random.normal((3670,))  # one a_i per image; the flower_photos dataset has 3670 images
b = tf.random.normal((3670,))  # one b_i per image
extra_ds = tf.data.Dataset.from_tensor_slices((a, b)).batch(batch_size)
train_ds = train_ds.map(change_inputs)
train_dataset = tf.data.Dataset.zip((train_ds, extra_ds))
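# Each element of train_dataset is now a pair ((x, x), (a, b)):
# the autoencoder's (input, target) batch plus the matching per-sample parameters.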

input_img = tf.keras.Input(shape=(28, 28, 3))
x = tf.keras.layers.Flatten()(input_img)
x = tf.keras.layers.Dense(28 * 28 * 3, activation='relu')(x)
output = tf.keras.layers.Reshape(target_shape=(28, 28 ,3))(x)
autoencoder = tf.keras.Model(input_img, output)

optimizer = tf.keras.optimizers.Adam()
epochs = 2
for epoch in range(epochs):
    print("\nStart of epoch %d" % (epoch,))

    for step, x_batch_train in enumerate(train_dataset):
        x, _ = x_batch_train[0]  # (input, target) batch; for the autoencoder both are x
        a, b = x_batch_train[1]  # per-sample parameters aligned with this batch
        with tf.GradientTape() as tape:

            x_hat = autoencoder(x, training=True) 
            loss_value = custom_loss(x, x_hat, a, b)

        grads = tape.gradient(loss_value, autoencoder.trainable_weights)
        optimizer.apply_gradients(zip(grads, autoencoder.trainable_weights))

        # Log every 200 batches.
        if step % 200 == 0:
            print("Training loss (for one batch) at step %d: %.4f"% (step, float(loss_value)))
            print(loss_value.numpy())
            print("Seen so far: %s samples" % ((step   1) * batch_size))