How can I load tf.data.dataset object into an autoencoder?


I have been struggling with this issue for weeks now... I am more or less trying to reproduce this code: https://github.com/mostafaibrahim17/Whole-Image-Slides-Unsupervised-Categorization/blob/master/Autoencoders/Convolutional Autoencoders/Basic Convolutional Autoencoder.ipynb

Unlike this example, where they load images as arrays:

import os
import numpy as np
from PIL import Image

## Data loading
trainData = "../../../autoenctrain/train"
testData = "../../../autoenctrain/test"

new_train = []
new_test = []

# Read every .tif image into a list of uint8 arrays
for filename in os.listdir(trainData):
    if filename.endswith(".tif"):
        image = Image.open(os.path.join(trainData, filename))
        new_train.append(np.asarray(image, dtype="uint8"))

for filename in os.listdir(testData):
    if filename.endswith(".tif"):
        image = Image.open(os.path.join(testData, filename))
        new_test.append(np.asarray(image, dtype="uint8"))

I have a lot of big images (256, 256, 3), and I would like to load them from the directory with the function tf.keras.utils.image_dataset_from_directory:

train_ds = tf.keras.utils.image_dataset_from_directory(
    trainData,
    label_mode=None,
    color_mode='rgb',
    batch_size=32,
    image_size=(256, 256))

In this example, label_mode=None because the images are in subdirectories and I don't want them to be labeled according to their subdirectory.
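For reference, here is a quick check of what the dataset yields; the commented output is what I would expect from the call above:

print(train_ds.element_spec)
# With label_mode=None the dataset yields plain image batches,
# not (image, label) pairs:
# TensorSpec(shape=(None, 256, 256, 3), dtype=tf.float32, name=None)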

I modified the autoencoder to adapt it to my images:

from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from tensorflow.keras.models import Model

input_img = Input(shape=(256, 256, 3))  # adapt this if using `channels_first` image data format

x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)  # 256 x 256 x 32
x = MaxPooling2D((2, 2), padding='same')(x)  # 128 x 128 x 32
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)  # 128 x 128 x 64
x = MaxPooling2D((2, 2), padding='same')(x)  # 64 x 64 x 64
x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)  # 64 x 64 x 128
encoded = MaxPooling2D((2, 2), padding='same')(x)  # 32 x 32 x 128

# at this point the representation is (32, 32, 128)

x = Conv2D(128, (3, 3), activation='relu', padding='same')(encoded)  # 32 x 32 x 128
x = UpSampling2D((2, 2))(x)  # 64 x 64 x 128
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)  # 64 x 64 x 64
x = UpSampling2D((2, 2))(x)  # 128 x 128 x 64
# x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)  # 256 x 256 x 64
decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(x)  # 256 x 256 x 3

autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='mean_squared_error')
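As a sanity check on the architecture, printing the summary should confirm that the decoder output matches the input shape, so the reconstruction loss is well-defined:

autoencoder.summary()
# The final Conv2D layer should report an output shape of
# (None, 256, 256, 3), matching the (256, 256, 3) input.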

But when I try to fit the model:

autoencoder_train = autoencoder.fit(train_ds, train_ds,
                                    epochs=25,
                                    batch_size=32,
                                    shuffle=True,
                                    validation_data=(test_ds, test_ds))

I get this error: ValueError: y argument is not supported when using dataset as input.

I tried loading a subset of images with the same method (the one with arrays), and there was no issue. So I have the feeling that I am missing something about the structure of the tf.data.Dataset object (its shape, or something like that).
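For comparison, this is roughly how the working array version of the fit call looked (a sketch; x_train and x_test are names I use here for the stacked versions of new_train and new_test from above):

# Stack the lists of uint8 arrays and scale to [0, 1]
x_train = np.asarray(new_train, dtype="float32") / 255.0  # (N, 256, 256, 3)
x_test = np.asarray(new_test, dtype="float32") / 255.0

# With plain arrays, passing the images as both x and y works fine:
autoencoder.fit(x_train, x_train,
                epochs=25,
                batch_size=32,
                shuffle=True,
                validation_data=(x_test, x_test))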

Could you please tell me:

  1. Why do I get this error?
  2. How can I fix it? How can I load my images from the directory and its subdirectories without using the "array" method?

Thank you very much!

P.S. This question is similar to this one: Tensorflow `y` argument is not supported when using dataset as input. But 1) the only answer has not been accepted, and 2) I am not sure that I understand it.

CodePudding user response:

Your architecture seems correct. All you have to change is the arguments to image_dataset_from_directory. The return of image_dataset_from_directory is a tuple consisting of the images and the labels. However, the fit function has its own argument where you can pass the labels.

To iterate over your data you could use:

for x, y in train_ds:
    print("image: {}, label: {}".format(x, y))

After having your x and y, all you have to do is pass them to the fit function:

autoencoder_train = autoencoder.fit(x, y,
                                    epochs=25,
                                    batch_size=32,
                                    shuffle=True)

I cannot try the code right now, but I think this should give you an idea of what to do.
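If you stay with the original label_mode=None dataset (which yields image batches without labels), one way to get arrays you can pass as both x and y is to materialize the dataset first. A sketch, untested and only practical if everything fits in memory:

import numpy as np

# Concatenate all batches of the (label-free) dataset into one array
x = np.concatenate([batch.numpy() for batch in train_ds], axis=0)
x = x / 255.0  # scale to [0, 1] to match the sigmoid output

# An autoencoder reconstructs its input, so x serves as both x and y:
autoencoder_train = autoencoder.fit(x, x,
                                    epochs=25,
                                    batch_size=32,
                                    shuffle=True)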

CodePudding user response:

After spending hours talking to ChatGPT, I found the way to solve this issue: create a new dataset that contains the input data as both the input and the target, using a map function.

Here is the code:

# load data
train_ds = tf.keras.utils.image_dataset_from_directory(
    'path/to/images',
    label_mode=None,
    color_mode='rgb',
    batch_size=32,
    image_size=(256, 256))

# normalize images to [0, 1]
def normalize_fn(image):
    image = image / 255.0
    return image

normalized_train_ds = train_ds.map(normalize_fn)

# duplicate the input so x and y are both the same images
train_ds_combined = normalized_train_ds.map(lambda x: (x, x))

Then do the same thing for the test dataset.
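For completeness, a sketch of the same pipeline for the test data (assuming 'path/to/test/images' points at your test directory):

test_ds = tf.keras.utils.image_dataset_from_directory(
    'path/to/test/images',
    label_mode=None,
    color_mode='rgb',
    batch_size=32,
    image_size=(256, 256))

normalized_test_ds = test_ds.map(normalize_fn)
test_ds_combined = normalized_test_ds.map(lambda x: (x, x))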

And once you have compiled your model:

autoencoder.fit(train_ds_combined, epochs=10, validation_data=test_ds_combined)

It worked for me, and I sincerely hope this helps anyone else stuck with this issue!
