No gradients provided for any variable: (['dense_15/kernel:0', 'dense_15/bias:0'


I'm relatively new to deep learning and am currently trying to implement a basic model with a custom loss function. The custom loss function is the main part of the code compared to the model's other parameters.

1. I've attached the kind of loss functions used (the loss function used in the code and a similar loss function).
2. The loss function needs to iterate through the true and predicted values and compute the loss separately for negative and positive errors. After dealing with tensor-related errors (like "iterating through tensors"), the current loss function runs without raising input-related problems.
3. The loss function works as a standalone function.

Please suggest any changes to the loss function, and any possible solutions to the current problem. I've attached the code for reference.

Regarding the loss function: I'm not yet familiar with tf.scan, tf.map_fn, keras.backend, and similar functions that were suggested in many answers to loss-function errors. Since the function now seems versatile and takes the data without issue, a solution to the current gradients problem would be highly preferred.

Here is the code:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

def custom_loss_tensor(y_train, y_pred):
    cs = 10.0  # weight applied to negative errors (under-prediction)
    ch = 1.0   # weight applied to positive errors (over-prediction)
    loss = 0
    y_train_t = tf.convert_to_tensor(y_train)
    y_pred_t = tf.convert_to_tensor(y_pred)
    num_train = y_train_t.numpy()
    num_pred = y_pred_t.numpy()
    l = len(num_train)
    for i in range(l):
        err = num_pred[i] - num_train[i]
        if err < 0:
            loss = loss + (cs * abs(err))
        else:
            loss = loss + (ch * abs(err))
    return loss

model = Sequential()
model.add(Dense(43, kernel_initializer='normal', activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(1))

model.compile(
    loss=custom_loss_tensor,
    optimizer='RMSprop',
    metrics=[keras.metrics.MeanAbsoluteError()],
    run_eagerly=True)

training = model.fit(
    x_train, y_train,
    batch_size=128,
    epochs=10,
    verbose=1)
```
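The problem can be reproduced without `model.fit` at all. Here is a minimal probe with `tf.GradientTape` (the `y_true` and `y_hat` toy data are hypothetical, made up just for this check): because the loss round-trips through `.numpy()`, its result is a plain NumPy value with no path back to the variables, so the tape yields `None` for every gradient.

```python
import tensorflow as tf

# Hypothetical toy data, just to probe the loss outside model.fit
y_true = tf.constant([[1.0], [2.0], [3.0]])
y_hat = tf.Variable([[1.5], [1.5], [3.5]])

with tf.GradientTape() as tape:
    loss = custom_loss_tensor(y_true, y_hat)

print(type(loss))  # a NumPy value, not a tf.Tensor: the graph was left behind
# None: the constant tensor built from the NumPy result is not connected to y_hat
print(tape.gradient(tf.convert_to_tensor(loss), y_hat))
```

The same missing gradients then surface in `model.fit` as the traceback below.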
```
ValueError                                Traceback (most recent call last)
<ipython-input-60-dac93a08cc41> in <module>
      3    batch_size=128,
      4    epochs = 1,
----> 5    verbose = 1
      6 )

E:\Anaconda\lib\site-packages\keras\utils\traceback_utils.py in error_handler(*args, **kwargs)
     68             # To get the full stack trace, call:
     69             # `tf.debugging.disable_traceback_filtering()`
---> 70             raise e.with_traceback(filtered_tb) from None
     71         finally:
     72             del filtered_tb

E:\Anaconda\lib\site-packages\keras\optimizers\optimizer_v2\utils.py in filter_empty_gradients(grads_and_vars)
     76         variable = ([v.name for _, v in grads_and_vars],)
     77         raise ValueError(
---> 78             f"No gradients provided for any variable: {variable}. "
     79             f"Provided `grads_and_vars` is {grads_and_vars}."
     80         )

ValueError: No gradients provided for any variable:
(['dense_21/kernel:0', 'dense_21/bias:0', 'dense_22/kernel:0',
  'dense_22/bias:0', 'dense_23/kernel:0', 'dense_23/bias:0'],).
Provided `grads_and_vars` is (
  (None, <tf.Variable 'dense_21/kernel:0' shape=(43, 43) dtype=float32, ...>),
  (None, <tf.Variable 'dense_21/bias:0' shape=(43,) dtype=float32, ...>),
  (None, <tf.Variable 'dense_22/kernel:0' shape=(43, 64) dtype=float32, ...>),
  (None, <tf.Variable 'dense_22/bias:0' shape=(64,) dtype=float32, ...>),
  (None, <tf.Variable 'dense_23/kernel:0' shape=(64, 1) dtype=float32, ...>),
  (None, <tf.Variable 'dense_23/bias:0' shape=(1,) dtype=float32, ...>)).
```

CodePudding user response:

You don't need to convert your tensors to NumPy. The `.numpy()` calls (and `float()` casts) pull the values out of TensorFlow's automatic differentiation, so nothing connects the loss back to the model weights, and every gradient comes back as `None`. Keep the whole computation in tensor operations:

```python
def custom_loss_tensor(y_train, y_pred):
    cs = 10.0
    ch = 1.0
    loss = 0.0
    # With run_eagerly=True the tensors are eager, so a Python loop works,
    # and staying in tensor ops keeps the gradient tape connected
    for i in range(len(y_train)):
        err = y_pred[i] - y_train[i]
        if err < 0:
            loss = loss + cs * abs(err)
        else:
            loss = loss + ch * abs(err)
    return loss
```
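If you want to avoid the Python loop entirely, here is a minimal vectorized sketch (assuming the same cs/ch weights and the summed loss from the question); because it uses only tensor ops like tf.where, it is differentiable and also works in graph mode, so `run_eagerly=True` is no longer required:

```python
import tensorflow as tf

def custom_loss_vectorized(y_train, y_pred):
    cs = 10.0  # weight for negative errors
    ch = 1.0   # weight for positive errors
    err = y_pred - y_train
    # Elementwise select: cs*|err| where err < 0, ch*|err| otherwise
    per_sample = tf.where(err < 0, cs * tf.abs(err), ch * tf.abs(err))
    return tf.reduce_sum(per_sample)
```

This is essentially an asymmetrically weighted L1 loss; gradients flow through `tf.where` to whichever branch was selected for each element.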