ValueError: Exception encountered when calling layer "max_pooling2d_26" (type MaxPooling2D)


I have the following code for building a CNN model with Keras. I have added three convolution layers and three pooling layers. While building the model, a ValueError arises from the third pooling layer. I have added the code and the error below. Please help.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dense, Dropout

model = Sequential()
model.add(Conv2D(filters = 32, kernel_size = (4,4), input_shape = (28,28,1), activation = 'relu'))
model.add(MaxPool2D(pool_size = (2,2)))


model.add(Conv2D(filters = 64, kernel_size = (4,4), input_shape = (28,28,1), activation = 'relu'))
model.add(MaxPool2D(pool_size=(2, 2)))


model.add(Conv2D(filters = 64, kernel_size = (4,4), input_shape = (28,28,1), activation = 'relu'))
model.add(MaxPool2D(pool_size = (2,2)))

model.add(Dense(128, activation = 'relu'))
model.add(Dropout(0.5))

model.add(Dense(10, activation = 'softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer = 'rmsprop', metrics = ['accuracy'])



### Error

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-27-52904ec71757> in <module>
     13 model.add(Conv2D(filters = 64, kernel_size = (4,4), input_shape = (28,28,1), activation = 'relu'))
     14 # pool layer
---> 15 model.add(MaxPool2D(pool_size = (2,2)))
     16 
     17 # Dense layer

~\Anaconda3\lib\site-packages\tensorflow\python\training\tracking\base.py in _method_wrapper(self, *args, **kwargs)
    528     self._self_setattr_tracking = False  # pylint: disable=protected-access
    529     try:
--> 530       result = method(self, *args, **kwargs)
    531     finally:
    532       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

~\Anaconda3\lib\site-packages\keras\utils\traceback_utils.py in error_handler(*args, **kwargs)
     65     except Exception as e:  # pylint: disable=broad-except
     66       filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67       raise e.with_traceback(filtered_tb) from None
     68     finally:
     69       del filtered_tb

~\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in _create_c_op(graph, node_def, inputs, control_inputs, op_def)
   1937   except errors.InvalidArgumentError as e:
   1938     # Convert to ValueError for backwards compatibility.
-> 1939     raise ValueError(e.message)
   1940 
   1941   return c_op

ValueError: Exception encountered when calling layer "max_pooling2d_29" (type MaxPooling2D).

Negative dimension size caused by subtracting 2 from 1 for '{{node max_pooling2d_29/MaxPool}} = MaxPool[T=DT_FLOAT, data_format="NHWC", explicit_paddings=[], ksize=[1, 2, 2, 1], padding="VALID", strides=[1, 2, 2, 1]](Placeholder)' with input shapes: [?,1,1,64].

Call arguments received:
  • inputs=tf.Tensor(shape=(None, 1, 1, 64), dtype=float32)

CodePudding user response:

Like @whereismywall said, the input to the max pooling layer is (1, 1, 64), which is too small to apply your (2, 2) pool size to. The short answer is to add the padding='same' argument to both the Conv2D and MaxPool2D layers.

Looking at your code and your prediction layer, I've assumed you want to preserve the height and width of your feature volume; padding='same' does this. This webpage explains it in more detail.

Side note: you don't have to redefine input_shape for the remaining layers, as you are using a Sequential model. The shape of the feature volume is determined by the previous layer, so as long as the first layer has its input shape defined, you don't need to specify it again.

You also need a Flatten layer before your first Dense layer to convert the feature volume into a 1D vector.
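
Putting this together, here is a minimal sketch of what the corrected model could look like (assuming you keep the 28x28x1 input and the rmsprop/categorical_crossentropy setup from the question). The shape comments assume padding='same' throughout:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense, Dropout

model = Sequential()

# Only the first layer needs input_shape; padding='same' keeps the height/width
model.add(Conv2D(filters=32, kernel_size=(4, 4), padding='same',
                 input_shape=(28, 28, 1), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2), padding='same'))   # 28x28 -> 14x14

model.add(Conv2D(filters=64, kernel_size=(4, 4), padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2), padding='same'))   # 14x14 -> 7x7

model.add(Conv2D(filters=64, kernel_size=(4, 4), padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2), padding='same'))   # 7x7 -> 4x4

model.add(Flatten())                                     # 4x4x64 -> 1024

model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
              metrics=['accuracy'])

With this setup the feature volume entering Flatten is 4x4x64, so none of the pooling layers run out of spatial dimensions.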

CodePudding user response:

The input to the third MaxPool layer has shape (1, 1, 64), on which you cannot run a 2x2 pool. You need to check the input dimensions of each layer. Sample:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D

model = Sequential()
model.add(Conv2D(filters = 32, kernel_size = (4,4), input_shape = (28,28,1), activation = 'relu'))
model.add(MaxPool2D(pool_size = (2,2)))


model.add(Conv2D(filters = 64, kernel_size = (4,4), input_shape = (28,28,1), activation = 'relu'))
model.add(MaxPool2D(pool_size=(2, 2)))


model.add(Conv2D(filters = 64, kernel_size = (4,4), activation = 'relu'))

model.summary()

Last layer output is:

conv2d_24 (Conv2D)           (None, 1, 1, 64)          65600 
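
For reference, with the default padding='valid' each layer's output size is floor((input - kernel) / stride) + 1, so the spatial dimensions in the original model shrink like this:

28x28 input
-> Conv2D 4x4, stride 1:  25x25
-> MaxPool 2x2, stride 2: 12x12
-> Conv2D 4x4:             9x9
-> MaxPool 2x2:            4x4
-> Conv2D 4x4:             1x1
-> MaxPool 2x2:            fails, since 2 cannot be subtracted from 1 (the ValueError above)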