Input 0 incompatible with layer inception_v3: expected shape=(None, 299, 299, 3), found shape=(1, 229, 229, 3)


How can I convert shape=(1, 299, 299, 3) to the expected shape=(None, 299, 299, 3) in order to feed it into a trained Inception V3 model?

NOTE: The same question has been asked before, but the answer isn't clear enough.

img = tf.io.read_file(img)
# decode the image into a tensor
img = tf.io.decode_image(img, channels=3)  # hardcode 3 channels so any image type is compatible
# resize the image
img = tf.image.resize(img, [229, 229])
# scale to [0, 1]
img = img / 255.

img.shape

TensorShape([229, 229, 3])

instance = np.expand_dims(img, axis=0)

print(instance.shape)

(1, 229, 229, 3)

predictions = model(instance).numpy().argmax(axis=1)

An error occurred when trying to get a prediction:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
in ()
----> 1 predictions = model(instance).numpy().argmax(axis=1)

1 frames
/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
    267                              ' is incompatible with layer ' + layer_name +
    268                              ': expected shape=' + str(spec.shape) +
--> 269                              ', found shape=' + display_shape(x.shape))
    270 
    271 

ValueError: Input 0 is incompatible with layer inception_v3: expected shape=(None, 299, 299, 3), found shape=(1, 229, 229, 3)
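
For reference, the expected input shape can also be read off the model itself; a quick check, assuming `model` is the trained InceptionV3 instance used above:

print(model.input_shape)   # (None, 299, 299, 3)
print(instance.shape)      # (1, 229, 229, 3)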

Following answer 01, I changed the code as follows and encountered another peculiar error.

img = tf.expand_dims(img, axis=0)
img = tf.keras.applications.inception_v3.preprocess_input(img)
predictions = model.predict(img).argmax(axis=1)

The new error is as follows:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-130-1d94d955d14c> in <module>()
----> 1 predictions = model.predict(img).argmax(axis=1)

9 frames
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
   1749           for step in data_handler.steps():
   1750             callbacks.on_predict_batch_begin(step)
-> 1751             tmp_batch_outputs = self.predict_function(iterator)
   1752             if data_handler.should_sync:
   1753               context.async_wait()

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
    883 
    884       with OptionalXlaContext(self._jit_compile):
--> 885         result = self._call(*args, **kwds)
    886 
    887       new_tracing_count = self.experimental_get_tracing_count()

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
    931       # This is the first call of __call__, so we have to initialize.
    932       initializers = []
--> 933       self._initialize(args, kwds, add_initializers_to=initializers)
    934     finally:
    935       # At this point we know that the initialization is complete (or less

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
    758     self._concrete_stateful_fn = (
    759         self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
--> 760             *args, **kwds))
    761 
    762     def invalid_creator_scope(*unused_args, **unused_kwds):

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
   3064       args, kwargs = None, None
   3065     with self._lock:
-> 3066       graph_function, _ = self._maybe_define_function(args, kwargs)
   3067     return graph_function
   3068 

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
   3461 
   3462           self._function_cache.missed.add(call_context_key)
-> 3463           graph_function = self._create_graph_function(args, kwargs)
   3464           self._function_cache.primary[cache_key] = graph_function
   3465 

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   3306             arg_names=arg_names,
   3307             override_flat_arg_shapes=override_flat_arg_shapes,
-> 3308             capture_by_value=self._capture_by_value),
   3309         self._function_attributes,
   3310         function_spec=self.function_spec,

/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes, acd_record_initial_resource_uses)
   1005         _, original_func = tf_decorator.unwrap(python_func)
   1006 
-> 1007       func_outputs = python_func(*func_args, **func_kwargs)
   1008 
   1009       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
    666         # the function a weak reference to itself to avoid a reference cycle.
    667         with OptionalXlaContext(compile_with_xla):
--> 668           out = weak_wrapped_fn().__wrapped__(*args, **kwds)
    669         return out
    670 

/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
    992           except Exception as e:  # pylint:disable=broad-except
    993             if hasattr(e, "ag_error_metadata"):
--> 994               raise e.ag_error_metadata.to_exception(e)
    995             else:
    996               raise

ValueError: in user code:

    /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1586 predict_function  *
        return step_function(self, iterator)
    /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1576 step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1286 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2849 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3632 _call_for_each_replica
        return fn(*args, **kwargs)
    /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1569 run_step  **
        outputs = model.predict_step(data)
    /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1537 predict_step
        return self(x, training=False)
    /usr/local/lib/python3.7/dist-packages/keras/engine/base_layer.py:1020 __call__
        input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
    /usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py:269 assert_input_compatibility
        ', found shape=' + display_shape(x.shape))

    ValueError: Input 0 is incompatible with layer inception_v3: expected shape=(None, 299, 299, 3), found shape=(None, 229, 229, 3)

CodePudding user response:

You should use tf.expand_dims to add a batch dimension to your existing tensor, and calling inception_v3.preprocess_input will scale the input pixels to between -1 and 1 for you:

import tensorflow as tf

model = tf.keras.applications.InceptionV3()

image = tf.random.normal((299, 299, 3))
image = tf.expand_dims(image, axis=0)
image = tf.keras.applications.inception_v3.preprocess_input(image)
preds = model.predict(image)
print(preds.argmax(axis=1))
[111]
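
If a readable label is wanted instead of the raw class index, the full prediction array can be passed through decode_predictions; a small follow-up sketch, assuming `preds` is the (1, 1000) output of model.predict(image) from the snippet above (pass the probabilities, not the argmax):

decoded = tf.keras.applications.inception_v3.decode_predictions(preds, top=3)
print(decoded[0])  # list of (class_id, class_name, probability) tuples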

You are resizing your image with the wrong dimensions. Instead of [229,229] you need [299,299]. Try this:


import tensorflow as tf
import numpy as np

model = tf.keras.applications.inception_v3.InceptionV3(weights="imagenet")
x = tf.io.read_file('dog.jpeg')
x = tf.io.decode_image(x,channels=3) 
x = tf.image.resize(x,[299,299])
x = tf.expand_dims(x, axis=0)
x = tf.keras.applications.inception_v3.preprocess_input(x)
preds = model.predict(x)
print(preds.argmax(axis=1))
[158]
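
To avoid the 229/299 mix-up in the first place, the whole preprocessing pipeline can be wrapped in one helper; a minimal sketch assuming the same model and local image file as above (the name `load_and_prep` is just illustrative):

import tensorflow as tf

IMG_SIZE = 299  # InceptionV3 expects 299x299 inputs

def load_and_prep(path):
    # read, decode, resize and batch an image for InceptionV3
    img = tf.io.read_file(path)
    img = tf.io.decode_image(img, channels=3)
    img = tf.image.resize(img, [IMG_SIZE, IMG_SIZE])
    img = tf.expand_dims(img, axis=0)  # add the batch dimension -> (1, 299, 299, 3)
    return tf.keras.applications.inception_v3.preprocess_input(img)  # scale to [-1, 1]

model = tf.keras.applications.inception_v3.InceptionV3(weights="imagenet")
preds = model.predict(load_and_prep('dog.jpeg'))
print(preds.argmax(axis=1))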