How to train a Keras Model with L1-norm reconstruction loss function?


I am currently building an autoencoder for the MNIST dataset with Keras; here is my code:

# import all the dependencies
from keras.layers import Dense,Conv2D,MaxPooling2D,UpSampling2D
from keras import Input, Model
from keras.datasets import mnist
import numpy as np
import matplotlib.pyplot as plt

encoding_dim = 15 
input_img = Input(shape=(784,))
# encoded representation of input
encoded = Dense(encoding_dim, activation='relu')(input_img)
# decoded representation of code 
decoded = Dense(784, activation='sigmoid')(encoded)
# Model which takes an input image and outputs the decoded image
autoencoder = Model(input_img, decoded)

# This model shows encoded images
encoder = Model(input_img, encoded)
# Creating a decoder model
encoded_input = Input(shape=(encoding_dim,))
# last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))

autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

The last step is the compile step, but I need to use an L1-norm reconstruction loss function. From the Keras losses documentation, it seems they don't provide this function. How can I apply an L1-norm reconstruction loss in the autoencoder.compile() call? Thank you!

CodePudding user response:

A loss function measures the expected error value. For the L1 norm, you can use MAE (mean absolute error), which Keras provides under the name mean_absolute_error. So you can rewrite the last line of your code as follows:

autoencoder.compile(optimizer='adam', loss='mean_absolute_error')
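
If you prefer to write the loss explicitly, you can also pass a custom function to compile(). The sketch below is equivalent to mean_absolute_error; it assumes the standard Keras backend, and the names l1_reconstruction_loss, x_train, and x_test are just illustrative, not part of your original code:

import keras.backend as K

def l1_reconstruction_loss(y_true, y_pred):
    # mean absolute difference between the input and its reconstruction
    return K.mean(K.abs(y_true - y_pred), axis=-1)

autoencoder.compile(optimizer='adam', loss=l1_reconstruction_loss)

# train the autoencoder to reconstruct its own input
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.
x_test = x_test.reshape(-1, 784).astype('float32') / 255.
autoencoder.fit(x_train, x_train, epochs=50, batch_size=256,
                validation_data=(x_test, x_test))

Either form gives the same L1-norm reconstruction objective; the built-in string is just more concise.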