How to use custom image sizes to train the segmentation models in Keras?


I am using the Qubvel segmentation models repository https://github.com/qubvel/segmentation_models to train an Inception-V3-encoder based model for a binary segmentation task. I am training the models with (256 width x 256 height) images and they work well. If I double one of the dimensions, for example to (256 width x 512 height), it works fine as well. However, when I adjust for the aspect ratio and resize the images to a custom dimension, say (272 width x 256 height), the model throws the following error:

ValueError: A `Concatenate` layer requires inputs with matching shapes except for the concatenation axis. Received: input_shape=[(None, 16, 18, 2048), (None, 16, 17, 768)]

Is there a way to use such custom dimensions to train these models?

CodePudding user response:

The ValueError says that the model is trying to concatenate two tensors whose shapes do not match. Note the channel counts (2048 and 768): these are intermediate feature maps inside the network, not your input images. With a (272 width x 256 height) input, the two branches feeding the same `Concatenate` layer end up with spatial sizes (16, 18) and (16, 17), and the layer cannot join them.
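For context, here is a minimal sketch of how the mismatch arises. It assumes the qubvel segmentation_models package with a tf.keras backend and a U-Net decoder; the question does not name the architecture, so the `Unet` call is an assumption:

    # Minimal sketch; assumes segmentation_models with tf.keras, and
    # assumes a U-Net decoder (the question does not say which one).
    import segmentation_models as sm

    sm.set_framework('tf.keras')

    # input_shape is (height, width, channels).
    # 256 x 256 builds fine.
    ok = sm.Unet('inceptionv3', input_shape=(256, 256, 3),
                 classes=1, activation='sigmoid')

    # 256 high x 272 wide: two branches feeding the same Concatenate
    # layer come out 18 and 17 wide, so building the model raises the
    # ValueError quoted above.
    broken = sm.Unet('inceptionv3', input_shape=(256, 272, 3),
                     classes=1, activation='sigmoid')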

A `Concatenate` operation requires inputs with matching shapes except along the concatenation axis.

A compatible concatenation would take inputs like (3, 256, 512, 3) and (15, 256, 512, 3) with axis=0 as the concatenation axis. Notice how the shapes match everywhere except on that axis. The output has shape (18, 256, 512, 3).
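As a small illustration in plain TensorFlow (the Keras `Concatenate` layer enforces the same rule):

    import tensorflow as tf

    # Shapes match on every axis except axis=0, so this works.
    a = tf.zeros((3, 256, 512, 3))
    b = tf.zeros((15, 256, 512, 3))
    print(tf.concat([a, b], axis=0).shape)  # (18, 256, 512, 3)

    # The shapes from the error differ on both the width axis (18 vs 17)
    # and the channel axis (2048 vs 768), so no choice of axis works.
    x = tf.zeros((1, 16, 18, 2048))
    y = tf.zeros((1, 16, 17, 768))
    try:
        tf.concat([x, y], axis=-1)
    except tf.errors.InvalidArgumentError as err:
        print(err)  # dimensions of inputs should match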

With your shapes, (None, 16, 18, 2048) and (None, 16, 17, 768), this is clearly not possible along any axis, because they differ on two axes at once. Keep your height and width fixed while training, and if an image does not fit that size, resize it before passing it in; this resizing can be done as part of preprocessing. If you do want a non-square input, pick dimensions the encoder can downsample cleanly: for most of these encoder-decoder models that means height and width divisible by 32, which is likely why 256 and 512 worked while 272 (not a multiple of 32) did not.
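A sketch of that preprocessing with `tf.data`; the target size, the dataset wiring, and the nearest-neighbour choice for masks are assumptions for illustration, not part of the original question:

    import tensorflow as tf

    # Assumed fixed training size; both values are divisible by 32.
    TARGET_H, TARGET_W = 256, 256

    def preprocess(image, mask):
        # Resize every image to the one fixed size the model was built for.
        image = tf.image.resize(image, (TARGET_H, TARGET_W))
        # Nearest-neighbour resizing keeps a binary mask strictly 0/1.
        mask = tf.image.resize(mask, (TARGET_H, TARGET_W), method='nearest')
        return image, mask

    # Example wiring into a pipeline of (image, mask) pairs:
    # dataset = dataset.map(preprocess).batch(8)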
