Custom loss function for time series data


I am trying to write a custom loss function for the first time. My model outputs time series data, and I want a loss function that penalizes errors later in the series more heavily than earlier ones, i.e. the index along the time dimension should determine the penalty. The tensors have the following structure.

y_true <tf.Tensor 'IteratorGetNext:1' shape=(None, 48, 1) dtype=float32>

y_pred <tf.Tensor 'ResNet34/dense_1/BiasAdd:0' shape=(None, 48, 1) dtype=float32>

What should I do to make the penalty a function of the index?

def custom_loss_function(y_true, y_pred):
    squared_difference = tf.square(y_true - y_pred) * 'sqrt(tensor_index)'  # <-- desired part
    return tf.reduce_mean(squared_difference, axis=-1)
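
To be concrete about the weighting I have in mind, this is the kind of per-step penalty vector I want to apply (just an illustration of the values with NumPy, not the actual loss code):

import numpy as np

# Hypothetical sqrt-of-index penalty for a 48-step series: later steps weigh more.
steps = np.arange(1, 49)     # 1-based step indices 1 .. 48
penalty = np.sqrt(steps)     # [1.0, 1.414..., 1.732..., ..., 6.928...]
print(penalty.shape)         # (48,)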

CodePudding user response:

Maybe try using tf.linspace:

import tensorflow as tf

y_true = tf.random.normal((1, 48, 1))
y_pred = tf.random.normal((1, 48, 1))

def custom_loss_function(y_true, y_pred):
    # Per-step penalty that increases linearly from 1 (first step) to 5 (last step).
    penalty = tf.cast(tf.linspace(start=1, stop=5, num=y_pred.shape[1]), dtype=tf.float32)
    print(penalty)
    # Expand the (48,) penalty to (48, 1) so it lines up with the last axis of the error.
    squared_difference = tf.square(y_true - y_pred) * tf.expand_dims(penalty, axis=-1)
    return tf.reduce_mean(squared_difference, axis=-1)

print(custom_loss_function(y_true, y_pred))
tf.Tensor(
[1.        1.0851064 1.1702127 1.2553191 1.3404255 1.4255319 1.5106384
 1.5957447 1.6808511 1.7659575 1.8510638 1.9361702 2.0212767 2.106383
 2.1914895 2.2765958 2.3617022 2.4468086 2.531915  2.6170213 2.7021277
 2.787234  2.8723404 2.9574468 3.0425532 3.1276596 3.212766  3.2978723
 3.3829787 3.468085  3.5531914 3.6382978 3.7234042 3.8085105 3.893617
 3.9787233 4.06383   4.1489363 4.2340426 4.319149  4.4042554 4.489362
 4.574468  4.6595745 4.744681  4.8297873 4.9148936 5.       ], shape=(48,), dtype=float32)
tf.Tensor(
[[1.3424503e+00 1.7936407e+00 9.5141016e-02 4.1933870e-01 2.9060142e-02
  1.6663458e+00 3.7182972e+00 2.3884547e-01 1.6393075e+00 9.8062935e+00
  1.4726014e+00 6.4087069e-01 1.4197667e+00 2.7730075e-01 2.6717324e+00
  1.2410884e+01 2.8422637e+00 2.2836231e+01 1.9438576e+00 7.2612977e-01
  2.9226139e+00 1.3040878e+01 5.8225789e+00 2.3456068e+00 2.8281093e+00
  4.2308202e+00 2.6682162e+00 4.0025130e-01 3.5946998e-01 8.0574770e-03
  2.7833527e-01 3.8349494e-01 7.1913116e-02 3.0325607e-03 5.8022089e+00
  4.4835452e-02 4.7429881e+00 6.4035267e-01 5.0330186e+00 2.7156603e+00
  1.2085355e-01 3.5016473e-02 7.9860941e-02 3.1455503e+01 5.3314602e+01
  3.8006527e+01 1.1620968e+01 4.1495290e+00]], shape=(1, 48), dtype=float32)
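
Assuming the function above is defined, it can be passed to compile like any built-in loss. A minimal sketch with a toy model standing in for the ResNet34 from the question:

import tensorflow as tf

# Placeholder model producing a (None, 48, 1) output, only to show how the loss is wired in.
inputs = tf.keras.Input(shape=(48, 1))
outputs = tf.keras.layers.Dense(1)(inputs)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss=custom_loss_function)

# Dummy data just to check the loss runs end to end.
x = tf.random.normal((4, 48, 1))
y = tf.random.normal((4, 48, 1))
model.fit(x, y, epochs=1, verbose=0)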

Update 1:

import tensorflow as tf

y_true = tf.random.normal((2, 48, 1))
y_pred = tf.random.normal((2, 48, 1))

def custom_loss_function(y_true, y_pred):
    # Per-step penalty ramping from 1 to 5, built from the dynamic time dimension.
    penalty = tf.cast(tf.linspace(start=1, stop=5, num=tf.shape(y_pred)[1]), dtype=tf.float32)
    # (T,) -> (T, 1) -> repeat across the batch and transpose -> (batch, T) -> (batch, T, 1)
    penalty = tf.expand_dims(penalty, axis=-1)
    penalty = tf.expand_dims(tf.transpose(tf.repeat(penalty, repeats=tf.shape(y_pred)[0], axis=1)), axis=-1)
    squared_difference = tf.square(y_true - y_pred) * penalty
    return tf.reduce_mean(squared_difference, axis=-1)
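
If you want the sqrt-of-index weighting from the question rather than a linear ramp, the same broadcasting pattern should work with tf.range (an untested sketch; steps are treated as 1-based so the first step keeps weight 1):

import tensorflow as tf

def sqrt_index_loss(y_true, y_pred):
    # Penalty sqrt(1), sqrt(2), ..., sqrt(T), where T is the number of time steps.
    steps = tf.range(1, tf.shape(y_pred)[1] + 1)
    penalty = tf.sqrt(tf.cast(steps, tf.float32))   # shape (T,)
    penalty = tf.reshape(penalty, (1, -1, 1))       # (1, T, 1) broadcasts over the batch
    squared_difference = tf.square(y_true - y_pred) * penalty
    return tf.reduce_mean(squared_difference, axis=-1)

y_true = tf.random.normal((2, 48, 1))
y_pred = tf.random.normal((2, 48, 1))
print(sqrt_index_loss(y_true, y_pred).shape)  # (2, 48)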