Is there a way to upscale an image using a layer in a machine learning model?


For instance, consider that I have a (32, 32, 1) grayscale input image. I want to use EfficientNet or any other pre-trained model to classify the data (or basically use transfer learning of any sort). Since EfficientNet takes a minimum input size of (75, 75, 3), is it possible for me to upscale my image using ONLY model weights?

For example, any combination of Conv2D, Dense, etc. that would work for my use case.

CodePudding user response:

  1. You can use tf.keras.layers.Resizing, which resizes an image input to a target height and width, right after the input layer inside your DL model. Check the docs for more details (a short sketch follows below this list).
    or
  2. If you read the image data from a folder, you don't need to add a new layer to your model. You can use the tf.keras.utils.image_dataset_from_directory method and specify image_size (an argument giving the size images are resized to after they are read from disk) as your desired target size.
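A minimal sketch of option 1, assuming EfficientNetB0 as the backbone, a (96, 96) target size, and 10 output classes; these choices, and the way the single channel is repeated to get 3 channels, are illustrative assumptions rather than part of the original answer:

```python
import tensorflow as tf

# (32, 32, 1) grayscale input, resized inside the model to (96, 96)
inputs = tf.keras.Input(shape=(32, 32, 1))
x = tf.keras.layers.Resizing(96, 96)(inputs)          # upscale height/width
x = tf.keras.layers.Concatenate()([x, x, x])          # grayscale -> 3 channels

# Pre-trained backbone, frozen for transfer learning
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(96, 96, 3))
base.trainable = False

x = base(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
```

And a sketch of option 2, assuming a placeholder directory path "data/train":

```python
# Let the input pipeline do the resizing instead of the model.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train",              # placeholder path
    image_size=(96, 96),       # images are resized to 96x96 after loading
    batch_size=32,
)
```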

CodePudding user response:

A Conv2D layer on its own will only keep or decrease the spatial size of the image.

You could instead use a 'deconvolution' layer: https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2DTranspose, which is a trainable layer; for example, strides=(3,3) multiplies the width and the height of the image by 3.

An example of its use is given in https://www.tensorflow.org/tutorials/generative/dcgan
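A minimal sketch of this approach, assuming EfficientNetB0 as the pre-trained backbone, strides=(3, 3) to go from 32x32 to 96x96, and 10 output classes; the filter count and other layer choices are illustrative assumptions:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(32, 32, 1))
# Trainable upsampling: padding="same" with strides (3, 3) gives 3x the
# spatial size, and filters=3 produces the 3 channels EfficientNet expects.
x = tf.keras.layers.Conv2DTranspose(
    filters=3, kernel_size=3, strides=(3, 3), padding="same")(inputs)  # -> (96, 96, 3)

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(96, 96, 3))
base.trainable = False                                 # transfer learning

x = base(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
model.summary()
```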
