ImageDataGenerator that outputs patches instead of full image

I have a big dataset that I want to use to train a CNN with Keras (too big to load into memory). I always train using ImageDataGenerator.flow_from_dataframe, as my images are spread across different directories, as shown below.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rescale=1./255.
)
train_gen = datagen.flow_from_dataframe(
    dataframe=train_df,
    x_col="filepath",
    class_mode="input",
    shuffle=True,
    seed=1)

However, this time I don't want to use the full images, but random patches of them instead, i.e., each time I want to pick a random image and take a random 32x32 patch of it. How can I do this?

I thought of using tf.extract_image_patches and sklearn.feature_extraction.image.extract_patches_2d, but I don't know whether it is possible to integrate these into flow_from_dataframe.

Any help would be appreciated.

CodePudding user response:

You could try using a preprocessing function in your ImageDataGenerator combined with tf.image.extract_patches:

import tensorflow as tf
import matplotlib.pyplot as plt

BATCH_SIZE = 32

def get_patches():
    def _get_patches(image):
        # extract_patches expects a batch dimension.
        image = tf.expand_dims(image, 0)
        # Split the 256x256 image into an 8x8 grid of 32x32 patches
        # -> shape (1, 8, 8, 3072), where 3072 = 32 * 32 * 3.
        patches = tf.image.extract_patches(images=image,
                                           sizes=[1, 32, 32, 1],
                                           strides=[1, 32, 32, 1],
                                           rates=[1, 1, 1, 1],
                                           padding='VALID')
        # Pack the patches back into the input shape, since
        # preprocessing_function must return the same shape it receives.
        return tf.reshape(patches, (256, 256, 3))
    return _get_patches

def reshape_data(images, labels):
    ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True)
    # Use the actual batch size so a smaller final batch does not break.
    for b in tf.range(tf.shape(images)[0]):
        # Random position in the 8x8 patch grid.
        i = tf.random.uniform((), maxval=int(256 / 32), dtype=tf.int32)
        j = tf.random.uniform((), maxval=int(256 / 32), dtype=tf.int32)
        # Undo the packing: (256, 256, 3) -> (8, 8, 3072).
        patched_image = tf.reshape(images[b], (8, 8, 3072))
        # Recover the chosen patch as a 32x32 RGB image.
        ta = ta.write(ta.size(), tf.reshape(patched_image[i, j], shape=(32, 32, 3)))
    return ta.stack(), labels

preprocessing = get_patches()
flowers = tf.keras.utils.get_file(
    'flower_photos',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
    untar=True)

img_gen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255,
    rotation_range=20,
    preprocessing_function=preprocessing)


ds = tf.data.Dataset.from_generator(
    lambda: img_gen.flow_from_directory(flowers, batch_size=BATCH_SIZE, shuffle=True),
    output_types=(tf.float32, tf.float32))

ds = ds.map(reshape_data)
images, _ = next(iter(ds.take(1)))

image = images[0] # (32, 32, 3)

plt.imshow(image.numpy())

The catch is that the preprocessing_function of the ImageDataGenerator must return an array with the same shape as its input. I therefore first extract the patches and pack them back into the shape of the original image. Later, in reshape_data, I reshape each image from (256, 256, 3) to (8, 8, 3072), pick a random patch from the grid, and return it with shape (32, 32, 3).
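
To see why this packing is lossless, here is a small self-contained check that can be run on its own (the random tensor stands in for a real image, and the grid position (2, 5) is arbitrary):

import tensorflow as tf

image = tf.random.uniform((1, 256, 256, 3))

patches = tf.image.extract_patches(images=image,
                                   sizes=[1, 32, 32, 1],
                                   strides=[1, 32, 32, 1],
                                   rates=[1, 1, 1, 1],
                                   padding='VALID')  # (1, 8, 8, 3072)

# Pack into the original image shape, then unpack again.
packed = tf.reshape(patches, (256, 256, 3))
grid = tf.reshape(packed, (8, 8, 3072))

# The patch recovered from the packed tensor matches the one
# extract_patches produced directly.
patch = tf.reshape(grid[2, 5], (32, 32, 3))
expected = tf.reshape(patches[0, 2, 5], (32, 32, 3))
print(bool(tf.reduce_all(patch == expected)))  # True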

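Since the question uses flow_from_dataframe with class_mode="input", the same approach should carry over; here is a minimal sketch under the same assumptions (the default target_size of (256, 256), the train_df from the question, and a hypothetical reshape_both helper that reuses reshape_data from above):

datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255.,
    preprocessing_function=get_patches())

ds = tf.data.Dataset.from_generator(
    lambda: datagen.flow_from_dataframe(
        dataframe=train_df,
        x_col="filepath",
        class_mode="input",
        shuffle=True,
        seed=1,
        batch_size=BATCH_SIZE),
    output_types=(tf.float32, tf.float32))

def reshape_both(images, targets):
    # With class_mode="input" the targets are the images themselves,
    # so the randomly chosen patch serves as both input and target.
    patches, _ = reshape_data(images, targets)
    return patches, patches

ds = ds.map(reshape_both)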