Discriminative layer training issue with callback ReduceLROnPlateau


I am trying to use TensorFlow Addons' MultiOptimizer for discriminative layer training (different learning rates for different layers), but it does not work with the ReduceLROnPlateau callback.

import tensorflow as tf
import tensorflow_addons as tfa
from tensorflow.keras.callbacks import ReduceLROnPlateau
from transformers import AdamWeightDecay  # assumed source of AdamWeightDecay

reduce_lr = ReduceLROnPlateau(patience=5, min_delta=1e-4, min_lr=1e-7, verbose=0)

with tpu_strategy.scope():
    roberta_model = create_model(512)

    # Two optimizers with different learning rates for different layer groups
    optimizers = [
        AdamWeightDecay(learning_rate=0.00001, weight_decay_rate=0.00001),
        AdamWeightDecay(learning_rate=0.0001, weight_decay_rate=0.0001)
    ]

    # Pair each optimizer with the layers it will operate on
    optimizers_and_layers = [
        (optimizers[0], roberta_model.layers[:3]),
        (optimizers[1], roberta_model.layers[3:])
    ]

    # Use MultiOptimizer from TensorFlow Addons
    opt = tfa.optimizers.MultiOptimizer(optimizers_and_layers)
    roberta_model.compile(
        optimizer=opt,
        loss=tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1),
        metrics=["accuracy"])

history = roberta_model.fit(train, epochs=50, validation_data=val, callbacks=[reduce_lr])

At the end of the first epoch it produces this error:

AttributeError: 'MultiOptimizer' object has no attribute 'lr'

It works fine without the ReduceLROnPlateau callback.

I tried several things to solve this; the last attempt was to write my own reduce-learning-rate-on-plateau callback by modifying the original one, but that is beyond my coding skills. I have commented below where I made a couple of changes to the original callback. I tried like this:

import numpy as np
import tensorflow as tf
from tensorflow.keras import backend
from tensorflow.python.platform import tf_logging as logging


class My_ReduceLROnPlateau(tf.keras.callbacks.Callback):

  def __init__(self,
               monitor='val_loss',
               factor=0.1,
               patience=10,
               verbose=0,
               mode='auto',
               min_delta=1e-4,
               cooldown=0,
               min_lr=0,
               **kwargs):
    super(My_ReduceLROnPlateau, self).__init__()

    self.monitor = monitor
    if factor >= 1.0:
      raise ValueError(
          f'ReduceLROnPlateau does not support a factor >= 1.0. Got {factor}')
    if 'epsilon' in kwargs:
      min_delta = kwargs.pop('epsilon')
      logging.warning('`epsilon` argument is deprecated and '
                      'will be removed, use `min_delta` instead.')
    self.factor = factor
    self.min_lr = min_lr
    self.min_delta = min_delta
    self.patience = patience
    self.verbose = verbose
    self.cooldown = cooldown
    self.cooldown_counter = 0  # Cooldown counter.
    self.wait = 0
    self.best = 0
    self.mode = mode
    self.monitor_op = None

    self._reset()

  def _reset(self):
    """Resets wait counter and cooldown counter.
    """
    if self.mode not in ['auto', 'min', 'max']:
      logging.warning('Learning rate reduction mode %s is unknown, '
                      'fallback to auto mode.', self.mode)
      self.mode = 'auto'
    if (self.mode == 'min' or
        (self.mode == 'auto' and 'acc' not in self.monitor)):
      self.monitor_op = lambda a, b: np.less(a, b - self.min_delta)
      self.best = np.Inf
    else:
      self.monitor_op = lambda a, b: np.greater(a, b + self.min_delta)
      self.best = -np.Inf
    self.cooldown_counter = 0
    self.wait = 0

  def on_train_begin(self, logs=None):
    self._reset()

  def on_epoch_end(self, epoch, logs=None):
    logs = logs or {}
    logs['lr'] = backend.get_value(self.model.optimizer[1].lr)
    current = logs.get(self.monitor)
    if current is None:
      logging.warning('Learning rate reduction is conditioned on metric `%s` '
                      'which is not available. Available metrics are: %s',
                      self.monitor, ','.join(list(logs.keys())))

    else:
      if self.in_cooldown():
        self.cooldown_counter -= 1
        self.wait = 0

      if self.monitor_op(current, self.best):
        self.best = current
        self.wait = 0
      elif not self.in_cooldown():
        self.wait += 1
        if self.wait >= self.patience:
          # Here I tried to subscript self.model.optimizer, guessing that
          # each index pointed to one of the optimizers, and reused the
          # same code as the original ReduceLROnPlateau to update them.
          old_lr1 = backend.get_value(self.model.optimizer[1].lr)
          old_lr0 = backend.get_value(self.model.optimizer[0].lr)
          if old_lr1 > np.float32(self.min_lr):
            new_lr1 = old_lr1 * self.factor
            new_lr1 = max(new_lr1, self.min_lr)
            backend.set_value(self.model.optimizer[1].lr, new_lr1)
            new_lr0 = old_lr0 * self.factor
            new_lr0 = max(new_lr0, self.min_lr)
            backend.set_value(self.model.optimizer[0].lr, new_lr0)
            if self.verbose > 0:
              print(
                  f'\nEpoch {epoch + 1}: '
                  f'ReduceLROnPlateau reducing learning rate to {new_lr0} and {new_lr1}.')
            self.cooldown_counter = self.cooldown
            self.wait = 0

  def in_cooldown(self):
    return self.cooldown_counter > 0

Then I created the callback:

reduce_lr = My_ReduceLROnPlateau(patience=5, min_delta=1e-4, min_lr=1e-7, verbose=0)

and started training again. At the end of the first epoch I got the following error:

TypeError: 'MultiOptimizer' object is not subscriptable

i.e. you can't do self.model.optimizer[1] or self.model.optimizer[0].

So my question is how to solve this, i.e. how to use discriminative layer training with ReduceLROnPlateau, either via some other method or by modifying my attempt at a new callback class.

Here is a link to the original ReduceLROnPlateau callback, i.e. without the few changes I made above in my custom callback.

A solution might be possible using this note from the tfa.optimizers.MultiOptimizer documentation:

Note: Currently, tfa.optimizers.MultiOptimizer does not support callbacks that modify optimizers. However, you can instantiate optimizer layer pairs with tf.keras.optimizers.schedules.LearningRateSchedule instead of a static learning rate.
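
For example, each optimizer could be given its own decay schedule at construction time instead of relying on a callback. A minimal sketch, assuming plain Adam optimizers and illustrative ExponentialDecay values (not my exact setup):

import tensorflow as tf
import tensorflow_addons as tfa

# Illustrative schedules: the initial rates mirror the two static rates
# above; the decay_steps/decay_rate values are placeholders.
slow_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-5, decay_steps=1000, decay_rate=0.9)
fast_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-4, decay_steps=1000, decay_rate=0.9)

optimizers_and_layers = [
    (tf.keras.optimizers.Adam(learning_rate=slow_schedule), roberta_model.layers[:3]),
    (tf.keras.optimizers.Adam(learning_rate=fast_schedule), roberta_model.layers[3:]),
]
opt = tfa.optimizers.MultiOptimizer(optimizers_and_layers)

This avoids the callback entirely, though the schedule is then fixed in advance rather than triggered by a plateau in the monitored metric.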

CodePudding user response:

Looking at the code of tfa.optimizers.MultiOptimizer (in the method create_optimizer_spec), it seems that the wrapped optimizers can be accessed via self.model.optimizer.optimizer_specs[0]["optimizer"] and self.model.optimizer.optimizer_specs[1]["optimizer"] to change the learning rate (which is why self.model.optimizer[1] raises an error). With that change, your custom callback seems to work.
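
As a sketch, the lookups in on_epoch_end of the custom callback above would become something like this (only the optimizer access changes; the rest of the callback stays the same):

# Inside My_ReduceLROnPlateau.on_epoch_end: index into optimizer_specs
# instead of subscripting the MultiOptimizer itself.
specs = self.model.optimizer.optimizer_specs

logs['lr'] = backend.get_value(specs[1]["optimizer"].lr)

# ...and in the reduction branch:
old_lr0 = backend.get_value(specs[0]["optimizer"].lr)
old_lr1 = backend.get_value(specs[1]["optimizer"].lr)
if old_lr1 > np.float32(self.min_lr):
  backend.set_value(specs[0]["optimizer"].lr, max(old_lr0 * self.factor, self.min_lr))
  backend.set_value(specs[1]["optimizer"].lr, max(old_lr1 * self.factor, self.min_lr))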
