How to change Learning rate in Tensorflow after a batch end?

Time:10-29

I need to create a class for searching for the optimal learning rate for the model, changing the value of the learning rate by 5% at the end of each batch. I have seen the on_train_batch_end() callback, but I am not able to set the learning rate from it.

CodePudding user response:

See the source of tf.keras.callbacks.ReduceLROnPlateau:

https://github.com/keras-team/keras/blob/v2.6.0/keras/callbacks.py#L2581-L2701

The secret command is

tf.keras.backend.set_value(self.model.optimizer.lr, new_lr)

You can't assign to it directly, nor can you use tf.Variable.assign(), because in Keras the learning rate may still be a plain attribute if build hasn't occurred yet, and only after the build does it become a variable.
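As a minimal sketch of that get/set round trip (assuming TF 2.x with Keras 2, where the optimizer exposes an `lr` attribute; the variable name `opt` is illustrative):

```python
import tensorflow as tf

# Build a standalone optimizer just to demonstrate reading and writing the LR.
opt = tf.keras.optimizers.SGD(learning_rate=0.001)

current = float(tf.keras.backend.get_value(opt.lr))   # read the current LR
tf.keras.backend.set_value(opt.lr, current * 0.95)    # shrink it by 5%
print(float(tf.keras.backend.get_value(opt.lr)))
```

Inside a callback you would do the same thing on `self.model.optimizer.lr`.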

CodePudding user response:

ReduceLROnPlateau only adjusts the learning rate at the end of an epoch. It does not do so at the end of a batch. To do that you need to create a custom callback. If I understand you correctly, you want to reduce the learning rate by 5% at the end of each batch. The code below will do that for you. In the callback, model is the name of your compiled model. freq is an integer that determines how often the learning rate is adjusted: if freq=1 it will be adjusted at the end of every batch, if freq=2 on every other batch, etc. reduction_pct is a float; it is the percentage the learning rate will be reduced by. verbose is a boolean. If verbose=True, a printout occurs each time the learning rate is adjusted, showing the LR used for the just-completed batch and the LR that will be used for the next batch. If verbose=False, no printout is generated.

import tensorflow as tf
from tensorflow import keras

class ADJLR_ON_BATCH(keras.callbacks.Callback):
    def __init__(self, model, freq, reduction_pct, verbose):
        super(ADJLR_ON_BATCH, self).__init__()
        self.model = model
        self.freq = freq
        self.reduction_pct = reduction_pct
        self.verbose = verbose
        self.adj_batch = freq
        self.factor = 1.0 - reduction_pct * .01
    def on_train_batch_end(self, batch, logs=None):
        lr = float(tf.keras.backend.get_value(self.model.optimizer.lr)) # get the current learning rate
        if batch + 1 == self.adj_batch:
            new_lr = lr * self.factor
            tf.keras.backend.set_value(self.model.optimizer.lr, new_lr) # set the learning rate in the optimizer
            self.adj_batch += self.freq
            if self.verbose:
                print('\nat the end of batch ', batch + 1, ' lr was adjusted from ', lr, ' to ', new_lr)

Below is an example of using the callback with the values I believe you wish to use:

model=your_model_name # variable name of your model
reduction_pct=5.0 # reduce lr by 5%
verbose=True  # print out each time the LR is adjusted
frequency=1   # adjust LR at the end of every batch
callbacks=[ADJLR_ON_BATCH(model, frequency, reduction_pct, verbose)]

Remember to include callbacks=callbacks in model.fit. Below is a sample of the resulting printout, starting with an LR of .001:

at the end of batch  1  lr was adjusted from  0.0010000000474974513  to  0.0009500000451225787
  1/374 [..............................] - ETA: 1:14:55 - loss: 9.3936 - accuracy: 0.3333
at the end of batch  2  lr was adjusted from  0.0009500000160187483  to  0.0009025000152178108

at the end of batch  3  lr was adjusted from  0.0009025000035762787  to  0.0008573750033974647
  3/374 [..............................] - ETA: 25:04 - loss: 9.1338 - accuracy: 0.4611  
at the end of batch  4  lr was adjusted from  0.0008573749801144004  to  0.0008145062311086804
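The printed values are just a geometric decay: after n batches the LR is lr_0 * 0.95**n. A minimal sketch of that arithmetic, independent of TensorFlow:

```python
# Reproduce the schedule above: start at 0.001 and cut by 5% per batch.
lr = 0.001
factor = 1.0 - 5.0 * .01  # same factor the callback computes

for batch in range(4):
    new_lr = lr * factor
    print(f"batch {batch + 1}: {lr:.9f} -> {new_lr:.9f}")
    lr = new_lr
```

After four batches this gives 0.001 * 0.95**4 = 0.00081450625, matching the last line of the printout (up to float32 rounding inside the optimizer).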