I am trying to add preprocessing layers to a TensorFlow model, but I can't figure out how to use tf.keras.layers.Reshape correctly.
I have some earlier preprocessing layers that take a numpy image to a TensorFlow tensor of shape TensorShape([50, 50, 3]).
The first layer of the model that I am trying to connect these preprocessing layers to is a convolutional layer, which requires a four-dimensional input. When I call the model on a tensor of shape TensorShape([50, 50, 3]), with only three dimensions, I get the error:
Input 0 of layer "Conv1" is incompatible with the layer: expected min_ndim=4, found ndim=3.
Full shape received: (50, 50, 3)
Call arguments received by layer "model_1" (type Functional):
• inputs=tf.Tensor(shape=(50, 50, 3), dtype=float32)
• training=False
• mask=None
Converting the input tensor to a numpy array and resizing it to a (1, 50, 50, 3) array, then feeding that into the model, works fine. However, I want to do this with a TensorFlow layer, so that I can save the preprocessing and the model inference together in a single SavedModel file without having to do the preprocessing in Python.
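For reference, the Python-side workaround looks roughly like this (just a sketch; img and model stand in for my actual (50, 50, 3) numpy image and the Keras model):
import numpy as np
batched = img.reshape((1, 50, 50, 3))  # add a leading batch dimension of 1
preds = model(batched)                 # the Conv1 layer now sees a 4-D input
This works, but the reshape happens outside the saved model, which is exactly what I want to avoid.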
tf.expand_dims(input_tensor, axis=0) works, but it's not a layer object, so I can't use it. It looks like tf.keras.layers.Reshape((1,50,50,3), input_shape=(50,50,3)) or tf.keras.layers.Reshape((1,50,50,3)) is the way to go, then.
My silly problem is that I just can't figure out how to use tf.keras.layers.Reshape, even with the tf/keras documentation.
When I pass a tensor of shape TensorShape([50, 50, 3]) to tf.keras.layers.Reshape((1,50,50,3)), I get the error message:
Input to reshape is a tensor with 7500 values, but the requested shape has 375000 [Op:Reshape]
Call arguments received by layer "reshape_9" (type Reshape):
• inputs=tf.Tensor(shape=(50, 50, 3), dtype=float32)
So, how do I get this to work? All I want is to end up with the same tensor as the input, but with TensorShape([1,50,50,3]) instead of TensorShape([50,50,3]), and I want it to happen through a tf.keras.layers object. In other words, I just want an identity transformation that adds a fourth dimension of size one at the front of the tensor so that a convolutional layer can consume it.
CodePudding user response:
You need to make sure to add an extra dimension for the batch size; if you are passing in a single image, the batch size would be 1. You can use np.expand_dims to add the extra dimension.
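For a single image, that would look something like this (a sketch; image and model are placeholders for your own array and model):
import numpy as np
batched = np.expand_dims(image, axis=0)  # (50, 50, 3) -> (1, 50, 50, 3)
preds = model(batched)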
CodePudding user response:
It's because the images are put together into batches. Try:
tf.keras.layers.Reshape((-1,50,50,3))
Or you can use a Lambda layer instead:
tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=0))
In Reshape, the value -1 means that dimension will be calculated for you, based on the number of values the layer receives; in expand_dims, the axis argument tells it where to insert the new dimension of size one.
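A minimal sketch of the Lambda approach (image_tensor and model are placeholders for your own (50, 50, 3) tensor and model):
import tensorflow as tf
add_batch_dim = tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=0))
batched = add_batch_dim(image_tensor)  # TensorShape([50, 50, 3]) -> TensorShape([1, 50, 50, 3])
preds = model(batched)
Since add_batch_dim is a layer object, it can be composed with the other preprocessing layers and saved together with the model.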