I'm facing the issue below when I train a model using VGG16


I am facing the following issue when trying to fit my model:

ValueError: Input 0 of layer "model" is incompatible with the layer: expected shape=(None, 256, 96, 3), found shape=(None, 1, 8, 3, 512)

Details of my model below:

from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Input, Flatten, Dense, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import TensorBoard

img_height = 96
img_width = 256

#Get back the convolutional part of a VGG network trained on ImageNet
model_vgg16_conv = VGG16(weights='imagenet', include_top=False, input_shape=(img_width, img_height, 3))
#Create your own input format (here 256x96x3)
input = Input(shape=(img_width, img_height, 3))

#Use the generated model 
output_vgg16_conv = model_vgg16_conv(input)

#Add the fully-connected layers 
x = Flatten(name='flatten')(output_vgg16_conv)
x = Dense(512, activation='relu', name='Dense1')(x)
x = Dropout(0.2, name = 'Dropout')(x)
x = Dense(45, activation='softmax', name='predictions')(x)

#Create your own model 
my_model = Model(inputs=input, outputs=x)

#In the summary, the layers from the VGG part are collapsed into a single block, but their weights will still be trained
my_model.summary()

my_model.compile(
    loss = 'sparse_categorical_crossentropy',
    optimizer = 'adam',
    metrics = ['accuracy']
)

my_model.fit(
    features,
    labels,
    batch_size = 5,
    epochs = 15,
    validation_split = 0.1,
    callbacks=[TensorBoard()]
    )

Any suggestions on how to adjust my model to resolve this issue? Note that features is X, labels is y, there are 4193 images in total, and there are 4 classes.

The code that generates my dataset:

import os
from tqdm import tqdm
from tensorflow.keras.preprocessing import image

conv_base = VGG16(
    weights='imagenet',
    include_top=False,
    input_shape=(img_width, img_height, 3)
)

Image reshaping:

for input_image in tqdm(os.listdir(dir)):
    try:
        img = image.load_img(os.path.join(dir, input_image), target_size=(img_width, img_height))
        img_tensor = image.img_to_array(img)
        img_tensor /= 255.

        # Run the image through the VGG16 base; the result is a feature map
        # of shape (1, 8, 3, 512), not an image of shape (256, 96, 3)
        pic = conv_base.predict(img_tensor.reshape(1, img_width, img_height, 3))
        data.append([pic, index])

    except Exception as e:
        # Silently skip any image that fails to load
        pass

Do I need to make any adjustments to this?

CodePudding user response:

You need to make sure that the inputs to your model are correct. The error occurs because you are feeding the model the VGG16 feature maps produced by conv_base.predict, each of shape (1, 8, 3, 512), instead of the raw images of shape (256, 96, 3) that it expects; your model already contains the VGG16 base, so it should receive the images directly. In the example below I am using randomly generated data, tf.random.normal((64, 256, 96, 3)), where 64 is the number of samples, 256 is your img_width, 96 is your img_height, and 3 is the number of channels. Also note that if you have 4 classes, your output layer should have 4 nodes.
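You can see where the unexpected (1, 8, 3, 512) shape comes from by checking the output shape of the VGG16 base (a quick check, using the same input_shape as in your code):

import tensorflow as tf

conv_base = tf.keras.applications.VGG16(
    weights='imagenet', include_top=False, input_shape=(256, 96, 3))
# VGG16 downsamples by a factor of 32 (five stride-2 max-pools),
# so 256 -> 8 and 96 -> 3, with 512 channels in the last block
print(conv_base.output_shape)  # (None, 8, 3, 512)

conv_base.predict then adds a batch dimension on top of that, giving (1, 8, 3, 512) per image.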

import tensorflow as tf

img_height = 96
img_width = 256

#Get back the convolutional part of a VGG network trained on ImageNet
model_vgg16_conv = tf.keras.applications.VGG16(weights='imagenet', include_top=False, input_shape=(img_width, img_height, 3))
#Create your own input format (here 256x96x3)
input = tf.keras.layers.Input(shape=(img_width, img_height, 3))

#Use the generated model 
output_vgg16_conv = model_vgg16_conv(input)

#Add the fully-connected layers 
x = tf.keras.layers.Flatten(name='flatten')(output_vgg16_conv)
x = tf.keras.layers.Dense(512, activation='relu', name='Dense1')(x)
x = tf.keras.layers.Dropout(0.2, name = 'Dropout')(x)
x = tf.keras.layers.Dense(4, activation='softmax', name='predictions')(x)

#Create your own model 
my_model = tf.keras.Model(inputs=input, outputs=x)

#In the summary, the layers from the VGG part are collapsed into a single block, but their weights will still be trained
my_model.summary()

my_model.compile(
    loss = 'sparse_categorical_crossentropy',
    optimizer = 'adam',
    metrics = ['accuracy']
)

my_model.fit(
    tf.random.normal((64, 256, 96, 3)),
    tf.random.uniform((64, 1), maxval=4, dtype=tf.int32),
    batch_size = 5,
    epochs = 15)
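To fix your dataset pipeline, drop the conv_base.predict step entirely: since your model already contains the VGG16 base, it should be fed the raw image tensors. A minimal sketch, assuming a hypothetical data_dir with one subdirectory per class:

import os
import numpy as np
from tensorflow.keras.preprocessing import image

img_height = 96
img_width = 256
data_dir = 'data'  # hypothetical root folder: one subdirectory per class

features, labels = [], []
for index, class_name in enumerate(sorted(os.listdir(data_dir))):
    class_dir = os.path.join(data_dir, class_name)
    for fname in os.listdir(class_dir):
        img = image.load_img(os.path.join(class_dir, fname),
                             target_size=(img_width, img_height))
        features.append(image.img_to_array(img) / 255.)  # shape (256, 96, 3)
        labels.append(index)

features = np.stack(features)  # shape (N, 256, 96, 3)
labels = np.array(labels)      # shape (N,), integer class ids

These arrays can then be passed straight to my_model.fit as in your original call.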

To add a batch dimension to a tensor of shape (256, 96, 3), turning it into (1, 256, 96, 3), try:

import tensorflow as tf

tensor = tf.random.normal((256, 96, 3))
tensor = tf.expand_dims(tensor, axis=0)
print(tensor.shape)  # (1, 256, 96, 3)
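If you are working with NumPy arrays instead of tensors, np.expand_dims does the same thing:

import numpy as np

tensor = np.random.normal(size=(256, 96, 3)).astype('float32')
tensor = np.expand_dims(tensor, axis=0)  # insert a batch dimension at axis 0
print(tensor.shape)  # (1, 256, 96, 3)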