I'm trying to build a custom loss function for my model, but whenever I convert Tensors to .numpy() arrays with run_eagerly=True, I get "WARNING: Gradients do not exist for variables ...". I have looked at how other custom loss functions are implemented in TensorFlow, but in my case I need to apply a mask, split the resulting index arrays, and then use those indices to apply arithmetic to specific positions via broadcasting. I can retrieve the index lists after masking, but I then need to access exactly those positions and apply a function to them, and I have found no way in TensorFlow to do that in a vectorized way.
error = y_true - y_pred
print(y_true.shape, y_pred.shape)
print(error.shape)
print("Error values: ", error)
Output: (10, 1000), (10, 1000)
(10, 1000)
Error values: <tf.Tensor: shape=(10, 1000), dtype=float64, numpy= array([[-10, 0, 8, ..., 3, -1.5, -2.5], ..., [ 2.5, 8 , 6.5, ..., 5.5, 3.5, -0.5]])>
mask = tf.where(y_true > 5)
i = mask[:, 0]
j = mask[:, 1]
i[:5], j[:5]
Results:
(<tf.Tensor: shape=(5,), dtype=int64, numpy=array([0, 0, 0, 0, 0], dtype=int64)>,
<tf.Tensor: shape=(5,), dtype=int64, numpy=array([19, 26, 28, 35, 39], dtype=int64)>)
In NumPy, I can do this update with:
error[i, j] = error[i, j] * 5
What I want is to replace the error values at those positions after executing the code above, so that I end up with something like:
Error values: <tf.Tensor: shape=(10, 1000), dtype=float64, numpy= array([[-10, 0, 40*, ..., 3, -1.5, -2.5], ..., [ 2.5, 40* , 32.5*, ..., 27.5*, 3.5, -0.5]])>
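For clarity, here is a small self-contained NumPy toy example of the operation I mean (a 2×4 array instead of my real (10, 1000) tensors):
import numpy as np

y_true = np.array([[1., 7., 3., 9.],
                   [6., 2., 8., 4.]])
error  = np.array([[-10., 8., 3., -1.5],
                   [ 2.5, 6.5, 5.5, -0.5]])

i, j = np.where(y_true > 5)      # row and column indices where the mask holds
error[i, j] = error[i, j] * 5    # scale only those positions, in place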
But when I try to execute this as Tensors, it gives the following error:
TypeError Traceback (most recent call last)
Input In [193], in <cell line: 1>()
----> 1 error[i, j]
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\util\traceback_utils.py:153, in filter_traceback.<locals>.error_handler(*args, **kwargs)
151 except Exception as e:
152 filtered_tb = _process_traceback_frames(e.__traceback__)
--> 153 raise e.with_traceback(filtered_tb) from None
154 finally:
155 del filtered_tb
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow\python\ops\array_ops.py:899, in _check_index(idx)
894 dtype = getattr(idx, "dtype", None)
895 if (dtype is None or dtypes.as_dtype(dtype) not in _SUPPORTED_SLICE_DTYPES or
896 idx.shape and len(idx.shape) == 1):
897 # TODO(slebedev): IndexError seems more appropriate here, but it
898 # will break `_slice_helper` contract.
--> 899 raise TypeError(_SLICE_TYPE_ERROR + ", got {!r}".format(idx))
TypeError: Only integers, slices (`:`), ellipsis (`...`), tf.newaxis (`None`) and scalar tf.int32/tf.int64 tensors are valid indices, got <tf.Tensor: shape=(4797,), dtype=int64, numpy=array([ 0, 0, 0, ..., 26, 26, 26], dtype=int64)>
I tried using other functions provided by TensorFlow too, but those did not work.
CodePudding user response:
You can't use index assignment in TensorFlow, but you can work around it with tf.gather_nd and tf.scatter_nd.
Here is an example with dummy input data:
# dummy ground-truth data
y_true = tf.random.uniform(shape=(10, 6), minval=0, maxval=10, dtype=tf.int32)
# (row, col) indices of every element greater than 5
mask = tf.where(y_true > 5)
# gather the selected values and scatter them back into a zero tensor of the same shape
y_mask = tf.scatter_nd(mask, tf.gather_nd(y_true, mask), shape=tf.cast(tf.shape(y_true), tf.int64))
print(y_true)
print(y_mask)
print(y_mask*4 + y_true)
which outputs:
y_true:
tf.Tensor(
[[9 9 0 1 4 2]
[9 8 6 3 9 8]
[9 0 1 4 7 7]
[4 8 6 3 4 1]
[9 1 8 9 3 9]
[2 8 3 4 9 2]
[2 5 7 5 2 2]
[6 7 6 7 9 4]
[2 8 9 5 2 1]
[7 4 1 9 7 9]], shape=(10, 6), dtype=int32)
y_mask:
tf.Tensor(
[[9 9 0 0 0 0]
[9 8 6 0 9 8]
[9 0 0 0 7 7]
[0 8 6 0 0 0]
[9 0 8 9 0 9]
[0 8 0 0 9 0]
[0 0 7 0 0 0]
[6 7 6 7 9 0]
[0 8 9 0 0 0]
[7 0 0 9 7 9]], shape=(10, 6), dtype=int32)
y_mask*4 + y_true:
tf.Tensor(
[[45 45 0 1 4 2]
[45 40 30 3 45 40]
[45 0 1 4 35 35]
[ 4 40 30 3 4 1]
[45 1 40 45 3 45]
[ 2 40 3 4 45 2]
[ 2 5 35 5 2 2]
[30 35 30 35 45 4]
[ 2 40 45 5 2 1]
[35 4 1 45 35 45]], shape=(10, 6), dtype=int32)
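As a side note (my own addition, not part of the code above): if you prefer to update the selected positions directly instead of rebuilding the result from a sum, tf.tensor_scatter_nd_update does the same job. A minimal sketch, assuming the goal from the question is to multiply the error values at the masked positions by 5:
import tensorflow as tf

# toy tensors standing in for the question's y_true and y_pred
y_true = tf.random.uniform(shape=(10, 6), minval=0, maxval=10, dtype=tf.float64)
y_pred = tf.random.uniform(shape=(10, 6), minval=0, maxval=10, dtype=tf.float64)
error = y_true - y_pred

indices = tf.where(y_true > 5)                       # (row, col) pairs, shape (N, 2)
selected = tf.gather_nd(error, indices)              # error values at those positions
error = tf.tensor_scatter_nd_update(error, indices, selected * 5)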
Another solution is to use the mask directly and cast it to int32 or float32 (or another dtype):
mask_float = tf.cast(y_true>5, tf.int32)
print(mask_float*y_true)
y_mask v2:
tf.Tensor(
[[9 9 0 0 0 0]
[9 8 6 0 9 8]
[9 0 0 0 7 7]
[0 8 6 0 0 0]
[9 0 8 9 0 9]
[0 8 0 0 9 0]
[0 0 7 0 0 0]
[6 7 6 7 9 0]
[0 8 9 0 0 0]
[7 0 0 9 7 9]], shape=(10, 6), dtype=int32)
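The same cast trick also reproduces the multiply-by-5 update from the question in a couple of lines. A minimal, self-contained sketch (my own addition; the variable names are just for illustration):
import tensorflow as tf

y_true = tf.random.uniform(shape=(10, 6), minval=0, maxval=10, dtype=tf.float32)
y_pred = tf.random.uniform(shape=(10, 6), minval=0, maxval=10, dtype=tf.float32)
error = y_true - y_pred

# 1.0 where y_true > 5, 0.0 elsewhere, in the same dtype as error
scale_mask = tf.cast(y_true > 5, error.dtype)
# masked entries are multiplied by 5, the rest stay unchanged
scaled_error = error * (1.0 + 4.0 * scale_mask)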
CodePudding user response:
If I understand your comment correctly, the following code should do what you want. It multiplies all error values greater than 5 by alpha (= 2 here):
# generate some error tensor
error = tf.random.uniform(shape=(10, 3), minval=0, maxval=10, dtype=tf.float64)
print('input error')
print(error)
float_mask = tf.cast(error>5, dtype=tf.float64)
print('mask')
print(float_mask)
alpha = 2.
print('gain = %f' % alpha)
error = error + (alpha-1.)*float_mask*error
print('output error')
print(error)
which gives:
input error
tf.Tensor(
[[9.47020833 6.21211945 2.56257082]
 [8.2855179  6.23372048 9.39559957]
 [5.2926297  2.62602144 4.44665184]
 [6.49200992 7.09389259 1.04311547]
 [9.39402112 2.68713794 7.71738653]
 [6.4853496  2.99997236 9.88983946]
 [3.57130888 5.73827016 5.91022104]
 [2.58102132 4.01791191 3.19829238]
 [9.28263857 4.73230455 6.24950981]
 [0.38713425 3.56589859 8.74955686]], shape=(10, 3), dtype=float64)
mask
tf.Tensor(
[[1. 1. 0.]
 [1. 1. 1.]
 [1. 0. 0.]
 [1. 1. 0.]
 [1. 0. 1.]
 [1. 0. 1.]
 [0. 1. 1.]
 [0. 0. 0.]
 [1. 0. 1.]
 [0. 0. 1.]], shape=(10, 3), dtype=float64)
gain = 2.000000
output error
tf.Tensor(
[[18.94041665 12.42423889  2.56257082]
 [16.5710358  12.46744096 18.79119913]
 [10.58525941  2.62602144  4.44665184]
 [12.98401983 14.18778517  1.04311547]
 [18.78804224  2.68713794 15.43477305]
 [12.9706992   2.99997236 19.77967893]
 [ 3.57130888 11.47654031 11.82044208]
 [ 2.58102132  4.01791191  3.19829238]
 [18.56527714  4.73230455 12.49901962]
 [ 0.38713425  3.56589859 17.49911371]], shape=(10, 3), dtype=float64)
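To tie this back to the original custom-loss problem: as long as every step stays in TensorFlow ops (no .numpy() conversions), gradients flow through this masked arithmetic and the "Gradients do not exist" warning should disappear. A minimal sketch of wrapping the idea into a Keras-compatible loss (my own addition; weighted_mse, the threshold of 5 and alpha = 2 are just illustrative choices):
import tensorflow as tf

def weighted_mse(alpha=2.0, threshold=5.0):
    # returns a loss function that scales error values above `threshold` by `alpha`
    def loss(y_true, y_pred):
        error = y_true - y_pred
        float_mask = tf.cast(error > threshold, dtype=error.dtype)
        weighted = error + (alpha - 1.0) * float_mask * error
        return tf.reduce_mean(tf.square(weighted))
    return loss

# usage: model.compile(optimizer="adam", loss=weighted_mse(alpha=2.0))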