Use tf.nn.local_response_normalization in keras layers

I am building a model in keras by adding layers, for example:

from tensorflow.keras import layers, models
from tensorflow.keras.layers import BatchNormalization

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', padding="same", input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu', padding="same"))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu', padding="same"))
model.add(BatchNormalization())
model.add(layers.Dropout(0.5))

model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(BatchNormalization())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(10))

Now I want to add local response normalization (LRN). As far as I know, keras does not provide an LRN layer, but the lower-level tf.nn module does have one: tf.nn.local_response_normalization.

Is it possible to mix tf.nn with keras?

CodePudding user response:

Yes, tf.nn.local_response_normalization can be used in a Lambda layer. One caveat: the op expects a 4-D tensor of shape (batch, height, width, channels), so the Lambda layer has to go before Flatten, for example after one of the convolution blocks, not after the final Dense layer. See the code below:

import tensorflow as tf

...
model.add(layers.Conv2D(64, (3, 3), activation='relu', padding="same"))
model.add(layers.Lambda(tf.nn.local_response_normalization))
model.add(layers.MaxPooling2D((2, 2)))
...
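If you want to control the op's hyperparameters (depth_radius, bias, alpha, beta), you can close over them in a Python lambda instead of passing the function directly. The values below are only illustrative, not tuned for this model:

model.add(layers.Lambda(
    lambda x: tf.nn.local_response_normalization(
        x, depth_radius=5, bias=1.0, alpha=1e-4, beta=0.75)))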
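If the model needs to be saved and reloaded, a Lambda wrapping a raw TF op can be awkward to serialize. A minimal sketch of an equivalent custom layer (the class name LocalResponseNormalization is my own, not a keras API):

import tensorflow as tf

class LocalResponseNormalization(tf.keras.layers.Layer):
    # Thin wrapper around tf.nn.local_response_normalization so the op
    # participates in Keras config-based serialization.
    def __init__(self, depth_radius=5, bias=1.0, alpha=1e-4, beta=0.75, **kwargs):
        super().__init__(**kwargs)
        self.depth_radius = depth_radius
        self.bias = bias
        self.alpha = alpha
        self.beta = beta

    def call(self, inputs):
        # Input must still be a 4-D (batch, height, width, channels) tensor.
        return tf.nn.local_response_normalization(
            inputs, depth_radius=self.depth_radius, bias=self.bias,
            alpha=self.alpha, beta=self.beta)

    def get_config(self):
        # Record the hyperparameters so the layer can be rebuilt from a config.
        config = super().get_config()
        config.update({"depth_radius": self.depth_radius, "bias": self.bias,
                       "alpha": self.alpha, "beta": self.beta})
        return config

It drops in wherever the Lambda would go:

model.add(LocalResponseNormalization())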