How to use transfer learning from TensorFlow Hub with custom image sizes when using ImageDataGenerator


I am trying to learn how to perform feature extraction from a pre-trained model for a transfer learning task. I am currently using the MobileNet v2 feature extractor from TensorFlow Hub, but the model's expected input shape is (224, 224, 3) while my images are 384x288x3. What I tried doing was:

import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator


IMG_SHAPE = (384, 288)
BATCH_SIZE = 32

train_dir = '/content/drive/MyDrive/dataset_split/Training'
test_dir = '/content/drive/MyDrive/dataset_split/Test'


train_datagen = ImageDataGenerator(rescale=1/255.)
test_datagen = ImageDataGenerator(rescale=1/255.)


training_dataset = train_datagen.flow_from_directory(train_dir, target_size=IMG_SHAPE,
                                                     batch_size=BATCH_SIZE, class_mode='categorical')


print("Testing Images: ")
test_data = test_datagen.flow_from_directory(test_dir, target_size=IMG_SHAPE,
                                             batch_size=BATCH_SIZE, class_mode='categorical')
    
mobilenet_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"



def create_model(model_url, num_classes=2):
  feature_extractor_layer = hub.KerasLayer(model_url, trainable=False,
                                           name="feature_extractor_layer",
                                           input_shape=IMG_SHAPE)
  model = tf.keras.Sequential([feature_extractor_layer,
                               layers.Dense(num_classes, activation="softmax", name="output_layer")])
  return model
        
mobilenet_model = create_model(mobilenet_url, num_classes=2)



mobilenet_model.compile(loss='categorical_crossentropy',
                        optimizer=tf.keras.optimizers.Adam(),
                        metrics=['accuracy'])


history = mobilenet_model.fit(training_dataset, epochs=5, steps_per_epoch=len(training_dataset),
                              validation_data=test_data, validation_steps=len(test_data),
                              callbacks=[create_tensorboard_callback(dir_name="tensorflow_hub",
                                                                     experiment_name="MobileNet_v2")])

I am getting the error at the following line:

mobilenet_model = create_model(mobilenet_url, num_classes=2)

The error stacktrace is the following:

ValueError: Exception encountered when calling layer "feature_extractor_layer" (type KerasLayer).

in user code:

    File "/usr/local/lib/python3.7/dist-packages/tensorflow_hub/keras_layer.py", line 237, in call  *
        result = smart_cond.smart_cond(training,

    ValueError: Could not find matching concrete function to call loaded from the SavedModel. Got:
      Positional arguments (4 total):
        * Tensor("inputs:0", shape=(None, 224, 224), dtype=float32)
        * False
        * False
        * 0.99
      Keyword arguments: {}
    
     Expected these arguments to match one of the following 4 option(s):
    
    Option 1:
      Positional arguments (4 total):
        * TensorSpec(shape=(None, 224, 224, 3), dtype=tf.float32, name='inputs')
        * True
        * False
        * TensorSpec(shape=(), dtype=tf.float32, name='batch_norm_momentum')
      Keyword arguments: {}
    
    Option 2:
      Positional arguments (4 total):
        * TensorSpec(shape=(None, 224, 224, 3), dtype=tf.float32, name='inputs')
        * True
        * True
        * TensorSpec(shape=(), dtype=tf.float32, name='batch_norm_momentum')
      Keyword arguments: {}
    
    Option 3:
      Positional arguments (4 total):
        * TensorSpec(shape=(None, 224, 224, 3), dtype=tf.float32, name='inputs')
        * False
        * True
        * TensorSpec(shape=(), dtype=tf.float32, name='batch_norm_momentum')
      Keyword arguments: {}
    
    Option 4:
      Positional arguments (4 total):
        * TensorSpec(shape=(None, 224, 224, 3), dtype=tf.float32, name='inputs')
        * False
        * False
        * TensorSpec(shape=(), dtype=tf.float32, name='batch_norm_momentum')
      Keyword arguments: {}


Call arguments received:
  • inputs=tf.Tensor(shape=(None, 224, 224), dtype=float32)
  • training=None

I'd like to know how I can use my own image shape for feature extraction. And if that isn't possible, how can I properly feed images of this size into the feature extractor?

CodePudding user response:

You need to resize your IMG_SHAPE = (384, 288) images to (224, 224), the input size mobilenet_v2 expects. One way to do the resizing is to add a Lambda layer that applies tf.image.resize inside your model:

def create_model(model_url, num_classes=2):
    # Accept images at their native 384x288x3 size
    inp = tf.keras.layers.Input((384, 288, 3))
    # Resize inside the model to the 224x224 input MobileNet v2 expects
    resize_img = tf.keras.layers.Lambda(lambda image: tf.image.resize(image, (224, 224)))

    feature_extractor_layer = hub.KerasLayer(model_url, trainable=False,
                                             name="feature_extractor_layer",
                                             input_shape=(224, 224, 3))

    model = tf.keras.Sequential([
        inp,
        resize_img,
        feature_extractor_layer,
        tf.keras.layers.Dense(num_classes,
                              activation="softmax",
                              name="output_layer")
    ])
    return model
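
Keeping the resize inside the model means you can keep feeding 384x288 batches from the generator, and the saved model will also accept that size at inference time. As a side note, if your TensorFlow version is 2.6 or newer, the Lambda can be swapped for the built-in resizing preprocessing layer; the following is only a minimal sketch under that version assumption (in earlier 2.x releases the layer lives at tf.keras.layers.experimental.preprocessing.Resizing, and create_model_resizing is just an illustrative name):

# Same model, with the built-in Resizing layer instead of a Lambda.
# Assumes TF >= 2.6, where tf.keras.layers.Resizing is available.
def create_model_resizing(model_url, num_classes=2):
    feature_extractor_layer = hub.KerasLayer(model_url, trainable=False,
                                             name="feature_extractor_layer",
                                             input_shape=(224, 224, 3))
    return tf.keras.Sequential([
        tf.keras.layers.Input((384, 288, 3)),  # native image size
        tf.keras.layers.Resizing(224, 224),    # bilinear resize to the expected input size
        feature_extractor_layer,
        tf.keras.layers.Dense(num_classes, activation="softmax", name="output_layer")
    ])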

Example Code:

import os
import numpy
from PIL import Image
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Generate a small random dataset: two class sub-directories each for
# training and test, filled with random 384x288 RGB images.
for loc, rep in zip(['training', 'test'], [20, 10]):
    for idx, c in enumerate([f'c/{loc}/1/', f'c/{loc}/2/']*rep):
        os.makedirs(c, exist_ok=True)  # make sure the class directory exists
        array = numpy.random.rand(384, 288, 3) * 255
        img = Image.fromarray(array.astype('uint8')).convert('RGB')
        img.save('{}img_{}.png'.format(c, idx))

IMG_SHAPE = (384, 288)
BATCH_SIZE = 32

train_dir = 'c/training'
test_dir = 'c/test'


train_datagen = ImageDataGenerator(rescale=1/255.)
test_datagen = ImageDataGenerator(rescale=1/255.)


training_dataset = train_datagen.flow_from_directory(train_dir, target_size=IMG_SHAPE,
                                                     batch_size=BATCH_SIZE, class_mode='categorical')


test_dataset = test_datagen.flow_from_directory(test_dir, target_size=IMG_SHAPE,
                                                batch_size=BATCH_SIZE, class_mode='categorical')
    
mobilenet_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"



def create_model(model_url, num_classes=2):
    inp = tf.keras.layers.Input((384, 288, 3))
    resize_img = tf.keras.layers.Lambda(lambda image: tf.image.resize(image, (224, 224)))

    feature_extractor_layer = hub.KerasLayer(model_url, trainable=False,
                                             name="feature_extractor_layer",
                                             input_shape=(224, 224, 3))

    model = tf.keras.Sequential([
        inp,
        resize_img,
        feature_extractor_layer,
        tf.keras.layers.Dense(num_classes,
                              activation="softmax",
                              name="output_layer")
    ])
    return model
        

mobilenet_model = create_model(mobilenet_url, num_classes=2)
mobilenet_model.compile(loss='categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(), metrics=['accuracy'])
history = mobilenet_model.fit(training_dataset, epochs=5, steps_per_epoch=len(training_dataset),
                              validation_data=test_dataset, validation_steps=len(test_dataset))

Output:

Found 40 images belonging to 3 classes.
Found 20 images belonging to 3 classes.
Epoch 1/5
2/2 [==============================] - 18s 7s/step - loss: 0.9844 - accuracy: 0.5000 - val_loss: 0.8181 - val_accuracy: 0.5500
Epoch 2/5
2/2 [==============================] - 5s 4s/step - loss: 0.7603 - accuracy: 0.5250 - val_loss: 0.7505 - val_accuracy: 0.4500
Epoch 3/5
2/2 [==============================] - 4s 2s/step - loss: 0.7311 - accuracy: 0.4750 - val_loss: 0.7383 - val_accuracy: 0.4500
Epoch 4/5
2/2 [==============================] - 2s 1s/step - loss: 0.7099 - accuracy: 0.5250 - val_loss: 0.7220 - val_accuracy: 0.4500
Epoch 5/5
2/2 [==============================] - 2s 1s/step - loss: 0.6894 - accuracy: 0.5000 - val_loss: 0.7162 - val_accuracy: 0.5000
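
If you don't strictly need the network itself to accept 384x288 inputs, an alternative is to resize in the data pipeline instead of in the model: flow_from_directory already resizes every image it loads to target_size, so passing (224, 224) there removes the need for the Lambda layer entirely. A minimal sketch of that variant, reusing train_dir, BATCH_SIZE, and mobilenet_url from above:

# Resize in the data pipeline instead of inside the model:
# flow_from_directory rescales every loaded image to target_size,
# so the hub layer can use its native (224, 224, 3) input shape.
train_datagen = ImageDataGenerator(rescale=1/255.)
training_dataset = train_datagen.flow_from_directory(train_dir, target_size=(224, 224),
                                                     batch_size=BATCH_SIZE, class_mode='categorical')

feature_extractor_layer = hub.KerasLayer(mobilenet_url, trainable=False,
                                         name="feature_extractor_layer",
                                         input_shape=(224, 224, 3))
model = tf.keras.Sequential([
    feature_extractor_layer,
    tf.keras.layers.Dense(2, activation="softmax", name="output_layer")
])

The trade-off is that the resizing then happens on the CPU in the generator rather than inside the model, and a model saved this way will only accept 224x224 inputs.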