how to define a modified leaky ReLU - TensorFlow

I would like to use a modified leaky ReLU as the activation for a dense layer, taking the minimum rather than the maximum. In other words, I want my activation to be f(x) = min{x, αx}. For 0 < α < 1 this keeps x for negative inputs and scales positive inputs by α, the mirror image of the usual leaky ReLU. I first define the function as shown below.

import tensorflow as tf

def new_leaky_relu(x, alpha):
    # Mask for the negative side: 1.0 where x <= 0, else 0.0
    part_1 = tf.cast(tf.math.greater_equal(0.0, x), dtype='float32')
    # Mask for the positive side: 1.0 where x >= 0, else 0.0
    part_2 = tf.cast(tf.math.greater_equal(x, 0.0), dtype='float32')
    # Keep x where it is negative, use alpha * x where it is positive
    return (part_1 * x) + (alpha * x * part_2)
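Called directly on a tensor, the function behaves as intended; a quick check (alpha=0.1 chosen arbitrarily):

x = tf.constant([-2.0, 3.0])
print(new_leaky_relu(x, alpha=0.1).numpy())  # [-2.   0.3], i.e. min(x, 0.1*x)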

When I test it on a simple model, however, I receive an error.

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(124,))])
model.add(tf.keras.layers.Dense(256, activation=new_leaky_relu(alpha=0.1)))

This fails with:

TypeError: new_leaky_relu() missing 1 required positional argument: 'x'

How can I ensure that it is treated as an activation function, so that I don't have to pass the input when building the model? Also, is the way I constructed my activation function efficient, or is there a better way?

I also tried the suggestions shared in another post: How do you create a custom activation function with Keras?

from keras.utils.generic_utils import get_custom_objects
from keras.layers import Activation
get_custom_objects().update({'custom_activation': Activation(new_leaky_relu)})
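For reference, once an activation is registered this way it can usually be referenced by its string name, assuming the registration and the model use the same Keras namespace (standalone keras vs. tf.keras):

model.add(tf.keras.layers.Dense(256, activation='custom_activation'))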

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(124,))])
model.add(tf.keras.layers.Dense(256, Activation(new_leaky_relu(alpha=0.1))))

This fails with the same error, since new_leaky_relu is still called with alpha only and no input tensor x.

CodePudding user response:

You can try the following: wrap the activation in a factory function that captures alpha in a closure, so the returned function takes only x, as Keras expects.

import tensorflow as tf

def custom_leaky_relu(alpha=0.0):
    # Factory: returns a one-argument activation with alpha captured in the closure
    def new_leaky_relu(x):
        # 1.0 where x <= 0, else 0.0
        part_1 = tf.cast(tf.math.greater_equal(0.0, x), dtype='float32')
        # 1.0 where x >= 0, else 0.0
        part_2 = tf.cast(tf.math.greater_equal(x, 0.0), dtype='float32')
        # x on the negative side, alpha * x on the positive side
        return (part_1 * x) + (alpha * x * part_2)
    return new_leaky_relu

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(124,))])
model.add(tf.keras.layers.Dense(256, activation=custom_leaky_relu(alpha=0.1)))
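As a quick sanity check, and touching on the "is there a better way" question: since f(x) = min{x, αx}, the same result can be computed directly with tf.minimum. A minimal sketch:

act = custom_leaky_relu(alpha=0.1)
x = tf.constant([-2.0, 3.0])
print(act(x).numpy())                  # [-2.   0.3]
print(tf.minimum(x, 0.1 * x).numpy())  # [-2.   0.3], same result in one op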