Keras Shape errors when trying to use pre-trained model

I want to use a pre-trained model (from Keras Applications), with weights, and append my (very simple) CNN model at the end. To this end I am trying to loosely follow the tutorial here under the sub-header 'Fine-tune InceptionV3 on a new set of classes'.

My original simple CNN model was this:

    model = Sequential()
    model.add(Rescaling(1.0 / 255))
    model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(256,256,3)))
    model.add(MaxPool2D(pool_size=(2, 2), strides=2))
    model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
    model.add(MaxPool2D(pool_size=(2, 2), strides=2))
    model.add(Flatten())
    model.add(Dense(units=5, activation='softmax'))

As I'm following the tutorial, I've converted it as so:

    x = base_model.output
    x = Rescaling(1.0 / 255)(x)
    x = Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(256,256,3))(x)
    x = MaxPool2D(pool_size=(2, 2), strides=2)(x)
    x = Conv2D(64, kernel_size=(3, 3), activation='relu')(x)
    x = MaxPool2D(pool_size=(2, 2), strides=2)(x)
    x = GlobalAveragePooling2D()(x)
    predictions = Dense(units=5, activation='softmax')(x)

As you can see, the differences are that the top model is a Sequential() model while the bottom one is Functional (I think?), and that the Flatten() layer has been replaced with GlobalAveragePooling2D(). I made that change because I kept getting shape-related errors and the model wouldn't build. I thought I had it once I swapped Flatten() for GlobalAveragePooling2D(), as this part of the code finally ran, but now that I'm trying to train the model, it gives me the following error:

ValueError: Exception encountered when calling layer "max_pooling2d_7" (type MaxPooling2D).

Negative dimension size caused by subtracting 2 from 1 for '{{node model/max_pooling2d_7/MaxPool}} = MaxPool[T=DT_FLOAT, data_format="NHWC", explicit_paddings=[], ksize=[1, 2, 2, 1], padding="VALID", strides=[1, 2, 2, 1]](model/conv2d_10/Relu)' with input shapes: [?,1,1,64].

Call arguments received:
  • inputs=tf.Tensor(shape=(None, 1, 1, 64), dtype=float32)

I don't want to remove the MaxPooling layer, as I want the layers appended to the fine-tuned model to stay as close as possible to my original 'simple CNN' model, so that I can compare the two results. But I keep getting hit with these shape errors, which I don't really understand, and it's coming to the end of the day.

Is there a nice quick-fix that can enable this VGG16 simple CNN to work?

CodePudding user response:

The first and most important technical problem in your model structure is that you are rescaling the images after they have passed through the base_model; the rescaling should be applied just before the base model.

The second is that you have defined input_shape on a convolution layer placed after the base model, even though the data passes through the base model first. You should define an Input layer before the base model, then pass its output through base_model and on through the remaining layers.

Here I've edited your code:

    inputs = Input(shape=(256, 256, 3))
    x = Rescaling(1.0 / 255)(inputs)
    x = base_model(x)
    x = Conv2D(32, kernel_size=(3, 3), activation='relu')(x)
    x = MaxPool2D(pool_size=(2, 2), strides=2)(x)
    x = Conv2D(64, kernel_size=(3, 3), activation='relu')(x)
    x = MaxPool2D(pool_size=(2, 2), strides=2)(x)
    x = GlobalAveragePooling2D()(x)
    predictions = Dense(units=5, activation='softmax')(x)

    model = keras.Model(inputs=inputs, outputs=predictions)

As for the error that was raised: in this case you can set the convolution layers' padding parameter to 'same', or resize the images to a larger size, to work around the problem.
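To see where the `[?, 1, 1, 64]` shape comes from, you can trace the spatial size by hand. The sketch below is an illustration, not Keras API: it assumes the base model emits an 8x8 feature map for 256x256 input (as VGG16 does after its five stride-2 pooling stages), and the helper functions just apply the standard output-size formulas for 'valid' convolution and 2x2/stride-2 pooling.

```python
# Trace the spatial size of the feature map through the appended layers,
# assuming the base model outputs an 8x8 map (256 / 2**5).

def conv_out(size, kernel=3, padding='valid'):
    # 'valid' 3x3 convolution shrinks the map by kernel - 1; 'same' keeps it.
    return size - kernel + 1 if padding == 'valid' else size

def pool_out(size, pool=2, stride=2):
    # A 2x2 / stride-2 max pool needs at least a 2x2 input.
    if size < pool:
        raise ValueError(f"Negative dimension: cannot 2x2-pool a {size}x{size} map")
    return (size - pool) // stride + 1

# With the default padding='valid':
size = 8                            # base model output
size = pool_out(conv_out(size))     # Conv2D(32): 8 -> 6, pool: 6 -> 3
size = conv_out(size)               # Conv2D(64): 3 -> 1
# pool_out(size) would now raise: a 1x1 map cannot be 2x2-pooled,
# which is exactly the "[?, 1, 1, 64]" MaxPooling2D error above.

# With padding='same' the convolutions preserve the size, so the chain survives:
size = 8
size = pool_out(conv_out(size, padding='same'))   # 8 -> 8 -> 4
size = pool_out(conv_out(size, padding='same'))   # 4 -> 4 -> 2
```

The same arithmetic shows why the original Sequential model worked: starting from the full 256x256 image instead of an 8x8 feature map, there is plenty of spatial extent left after two conv/pool stages.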
