In neural networks, activation is applied by a function or layer?


I am using the TensorFlow/Keras Functional API to build a CNN model. In this model, I am trying to apply a custom activation (with constraints) to the output layer.

After going through various resources (1, 2), I am confused about whether the activation should be applied by a simple Python function or by a layer.

I tried implementing it by subclassing the Layer class as follows:

import tensorflow as tf
from tensorflow.keras import layers


class MapToBounds(layers.Layer):
    """Custom activation layer that maps inputs into [lower_bound, upper_bound]."""

    def __init__(self, lower_bound, upper_bound, **kwargs):
        super().__init__(**kwargs)
        self.lower_bound = lower_bound
        self.upper_bound = upper_bound

    def call(self, inputs):
        # Sigmoid squashes to (0, 1); rescale so the output covers the full interval.
        return tf.add(self.lower_bound,
                      tf.multiply(tf.sigmoid(inputs), self.upper_bound - self.lower_bound))

and called it in the model as

x = MapToBounds(lower_bound=-3.0, upper_bound=20.0)(x)

where x is the output of the previous layer.

My questions are:

  1. Is it the right approach?
  2. In this approach, do I have to set training=False?
  3. Is there any simple way to implement it with a plain Python function instead of a layer?

CodePudding user response:

Your approach is actually quite clean. You could also just use tf.add directly on a tensor, but a custom layer is more elegant (IMO):

import tensorflow as tf

lower_bound = -3.0
upper_bound = 20.0

inputs = tf.keras.layers.Input((5,))
x = tf.keras.layers.Dense(20)(inputs)
# Same mapping as the custom layer, applied directly to the tensor.
outputs = tf.add(lower_bound, tf.multiply(tf.sigmoid(x), upper_bound - lower_bound))
model = tf.keras.Model(inputs, outputs)

model.summary()
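
To your third question: you do not strictly need a layer at all. Keras accepts any callable as an activation, so a plain Python function works too. A minimal sketch (the function name map_to_bounds is just illustrative):

import tensorflow as tf

lower_bound = -3.0
upper_bound = 20.0

def map_to_bounds(x):
    # Plain function applying the same bounded-sigmoid mapping.
    return lower_bound + tf.sigmoid(x) * (upper_bound - lower_bound)

inputs = tf.keras.layers.Input((5,))
# Pass the function via the activation argument instead of adding a layer.
outputs = tf.keras.layers.Dense(20, activation=map_to_bounds)(inputs)
model = tf.keras.Model(inputs, outputs)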

Conclusion: all of these solutions are valid. Regarding the training=False flag, you do not have to worry about it unless you want your activation function to act differently during training and inference.
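
If you ever do want training-dependent behavior, note that Keras passes a training flag to call automatically; you only need to declare it. A hypothetical sketch (the noise injection is just an example of mode-dependent behavior, not something your use case needs):

import tensorflow as tf

class NoisyMapToBounds(tf.keras.layers.Layer):
    """Hypothetical variant: perturbs the pre-activation only while training."""

    def __init__(self, lower_bound, upper_bound, stddev=0.1, **kwargs):
        super().__init__(**kwargs)
        self.lower_bound = lower_bound
        self.upper_bound = upper_bound
        self.stddev = stddev

    def call(self, inputs, training=None):
        # Keras sets training=True inside fit() and False in predict()/evaluate().
        if training:
            inputs = inputs + tf.random.normal(tf.shape(inputs), stddev=self.stddev)
        return self.lower_bound + tf.sigmoid(inputs) * (self.upper_bound - self.lower_bound)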
