How to create my own preprocessing layer in Tensorflow in python?

Time:04-28

I have a specific sequential model for preprocessing data as follows:

data_transformation = tf.keras.Sequential([
    layers.experimental.preprocessing.RandomContrast(factor=(0.7, 0.9)),
    layers.GaussianNoise(stddev=tf.random.uniform(shape=(), minval=0, maxval=1)),
    layers.experimental.preprocessing.RandomRotation(
        factor=0.1, fill_mode='reflect', interpolation='bilinear',
        seed=None, name=None, fill_value=0.0),
    layers.experimental.preprocessing.RandomZoom(
        height_factor=(0.1, 0.2), width_factor=(0.1, 0.2),
        fill_mode='reflect', interpolation='bilinear',
        seed=None, name=None, fill_value=0.0),
])

However, I would like to add my own preprocessing layer, that is defined by the Python function below:

import tensorflow as tf
import random

def my_random_contrast(image_to_be_transformed, contrast_factor):

    #build the contrast factor 
    selected_contrast_factor = random.uniform(1 - contrast_factor, 1 + contrast_factor)
    
    selected_contrast_factor_c1=selected_contrast_factor
    selected_contrast_factor_c2=selected_contrast_factor-0.01
    selected_contrast_factor_c3=selected_contrast_factor-0.02
    
    
    image_to_be_transformed=image_to_be_transformed.numpy()
    image_to_be_transformed[0,:,:]=((image_to_be_transformed[0,:,:]-tf.reduce_mean(image_to_be_transformed[0,:,:]))*selected_contrast_factor_c1)+tf.reduce_mean(image_to_be_transformed[0,:,:])
    image_to_be_transformed[1,:,:]=((image_to_be_transformed[1,:,:]-tf.reduce_mean(image_to_be_transformed[1,:,:]))*selected_contrast_factor_c2)+tf.reduce_mean(image_to_be_transformed[1,:,:])
    image_to_be_transformed[2,:,:]=((image_to_be_transformed[2,:,:]-tf.reduce_mean(image_to_be_transformed[2,:,:]))*selected_contrast_factor_c3)+tf.reduce_mean(image_to_be_transformed[2,:,:])
    
    image_to_be_transformed=tf.convert_to_tensor(image_to_be_transformed)
    return image_to_be_transformed


x=tf.random.uniform(shape=[3,224,224], minval=0, maxval=1, dtype=tf.float32)
y=my_random_contrast(x, 0.5)

How can I do this with TensorFlow? Since the new preprocessing layer will receive inputs from and pass outputs to other layers, do I have to guarantee that the inputs and outputs are of a given type?

CodePudding user response:

You need to make sure that your method works with a batch dimension if you plan to preprocess more than one image at a time. It also needs to avoid NumPy operations so that it can run in graph mode. Then you can write your own custom layer or a simple `Lambda` layer:

import tensorflow as tf
import matplotlib.pyplot as plt
import pathlib

dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True)
data_dir = pathlib.Path(data_dir)

ds = tf.keras.utils.image_dataset_from_directory(
  data_dir,
  image_size=(180, 180),
  batch_size=32, shuffle=False)

def my_random_contrast(image_to_be_transformed, contrast_factor):

    #build the contrast factor 
    selected_contrast_factor = tf.random.uniform((), minval=1 - contrast_factor, maxval=1 + contrast_factor)
    
    selected_contrast_factor_c1=selected_contrast_factor
    selected_contrast_factor_c2=selected_contrast_factor-0.01
    selected_contrast_factor_c3=selected_contrast_factor-0.02
    
    c0 = ((image_to_be_transformed[:,:,:, 0]-tf.reduce_mean(image_to_be_transformed[:,:,:, 0]))*selected_contrast_factor_c1)+tf.reduce_mean(image_to_be_transformed[:,:,:, 0])
    c1 = ((image_to_be_transformed[:,:,:, 1]-tf.reduce_mean(image_to_be_transformed[:,:,:, 1]))*selected_contrast_factor_c2)+tf.reduce_mean(image_to_be_transformed[:,:,:, 1])
    c2 = ((image_to_be_transformed[:,:,:, 2]-tf.reduce_mean(image_to_be_transformed[:,:,:, 2]))*selected_contrast_factor_c3)+tf.reduce_mean(image_to_be_transformed[:,:,:, 2])

    # Reassemble the three adjusted channels into one tensor.
    image_to_be_transformed = tf.stack([c0, c1, c2], axis=-1)

    return image_to_be_transformed


images, _ = next(iter(ds.take(1)))
image = images[2]
f, axarr = plt.subplots(1, 2)
axarr[0].imshow(image / 255)

# After preprocessing dataset:
images, _ = next(iter(ds.map(lambda x, y: (my_random_contrast(x, 0.8), y)).take(1)))
image = images[2]
axarr[1].imshow(image / 255)

(Plot: the original image on the left, the contrast-adjusted result on the right.)

Example usage:

model = tf.keras.Sequential()
model.add(tf.keras.layers.Lambda(lambda x: my_random_contrast(x, 0.8)))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(1))
model.compile(optimizer='adam', loss='mse')
model.fit(tf.random.normal((50, 64, 64, 3)), tf.random.normal((50, 1)))
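Note that a `Lambda` layer as written applies the augmentation at inference time too. If you want the behavior of the built-in preprocessing layers (active only during training), you can wrap the same per-channel contrast logic in a custom `Layer` subclass that respects the `training` flag. A minimal sketch, with the transform vectorized over channels (the class name is my own):

```python
import tensorflow as tf

class MyRandomContrast(tf.keras.layers.Layer):
    """Per-channel random contrast, active only while training."""

    def __init__(self, contrast_factor, **kwargs):
        super().__init__(**kwargs)
        self.contrast_factor = contrast_factor

    def call(self, inputs, training=None):
        if not training:
            return inputs
        # One random factor per call, offset slightly per channel,
        # mirroring the function above.
        f = tf.random.uniform((), 1 - self.contrast_factor,
                              1 + self.contrast_factor)
        factors = tf.stack([f, f - 0.01, f - 0.02])    # shape (3,)
        mean = tf.reduce_mean(inputs, axis=[0, 1, 2])  # per-channel mean
        return (inputs - mean) * factors + mean

    def get_config(self):
        return {**super().get_config(), "contrast_factor": self.contrast_factor}
```

At inference time (the default `training=None`) the layer is the identity, just like the built-in `RandomContrast`, and `get_config` makes it serializable with the rest of the model.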

CodePudding user response:

You need to create a custom layer by subclassing the `Layer` class; an example can be found in the TensorFlow guide Making new Layers and Models via subclassing. Many examples can be found on the web.
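For reference, the bare pattern from that guide looks like this (a hypothetical `Rescale` layer used only to show the structure, not the contrast transform from the question):

```python
import tensorflow as tf

# Bare-bones custom layer following the subclassing guide:
# store configuration in __init__, do the computation in call().
class Rescale(tf.keras.layers.Layer):
    def __init__(self, scale, **kwargs):
        super().__init__(**kwargs)
        self.scale = scale

    def call(self, inputs):
        return inputs * self.scale

layer = Rescale(1.0 / 255)
out = layer(tf.ones((1, 2, 2, 3)))  # each pixel becomes 1/255
```

Any preprocessing logic that uses only TensorFlow ops can go in `call`, and the layer then composes with `tf.keras.Sequential` like any built-in layer.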
