TensorFlow: a strange problem where the training loss becomes NaN

Time: 10-18

While training a network with TensorFlow, the training loss becomes NaN around the seventh epoch of every run. Yet the training accuracy does not drop sharply or fall to zero, and the network still seems to be training (I have not inspected the network's actual output values, but judging from how the accuracy keeps changing, the outputs should still be finite numbers). Zeroing out the vectors made no difference; the loss still turns into NaN at the seventh epoch. The optimizer is a RAdam implementation taken from GitHub; after switching to Adam the problem went away. Strangely, the same RAdam trains another network normally, with no NaN at all. I am stumped as to where the mistake is. Any help appreciated!
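One way to pin down exactly when the NaN first appears is to stop training the moment the loss goes non-finite and to check intermediate tensors for inf/NaN. A minimal sketch (the model and data below are placeholders, not from the post):

    import numpy as np
    import tensorflow as tf

    # Raise an error at the first op that produces an inf/NaN tensor
    # (this slows training, so enable it only while debugging).
    tf.debugging.enable_check_numerics()

    # Placeholder model and data, just to make the sketch runnable.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    x = np.random.rand(256, 32).astype("float32")
    y = np.random.randint(0, 10, size=(256,))

    # TerminateOnNaN stops fit() as soon as a batch loss becomes NaN,
    # so the offending epoch/batch is easy to spot in the log.
    model.fit(x, y, epochs=10,
              callbacks=[tf.keras.callbacks.TerminateOnNaN()])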

CodePudding user response:

pip install tensorflow-radam
Is this the RAdam you are using?
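If a packaged implementation is what is meant here, one maintained option is the RectifiedAdam optimizer shipped in TensorFlow Addons. A hedged sketch of plugging it in (the package choice, model, and learning rate are assumptions, not from the thread):

    import tensorflow as tf
    import tensorflow_addons as tfa  # pip install tensorflow-addons

    # RectifiedAdam is the RAdam variant maintained in TensorFlow Addons;
    # the learning rate here is an illustrative value only.
    optimizer = tfa.optimizers.RectifiedAdam(learning_rate=1e-3)

    # Placeholder model, just to show where the optimizer plugs in.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer=optimizer,
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])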

CodePudding user response:

Quoting the reply from Zhu Mingde (1st floor):
pip install tensorflow-radam
Is this the RAdam you are using?

No, that is not the one I am using. I use source code someone else published on GitHub (from RAdam import RAdamOptimizer), so it is probably not the same as yours. Could the problem be in that source code? I will try the package you mentioned.
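Independent of which RAdam implementation is at fault, a common stopgap when a loss blows up mid-training is to clip gradients on the optimizer. A minimal sketch using the built-in Adam (the clip value and learning rate are illustrative, not from the thread):

    import tensorflow as tf

    # clipnorm rescales each gradient to at most the given L2 norm before
    # the update, which often keeps one bad batch from producing NaN.
    optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3, clipnorm=1.0)

If training only stays finite with clipping enabled, that points at exploding gradients rather than a bug in the optimizer itself.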