How do I resolve incompatibility with Layers in Deep Learning


I am new to deep learning and am trying to learn using the MNIST dataset, but I am running into a compatibility issue when running the code below. The aim of this work is to use only densely connected layers throughout.

# Importing 
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Dense, Dropout
from keras.models import Sequential
from tensorflow.keras.optimizers import RMSprop


# Loading and splitting the dataset into training and test sets
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# preprocessing
train_images = train_images/ 255.0
test_images = test_images/ 255.0

# specify the input shape and number of classes
INPUTSHAPE = (60000,28*28)
NUM_CLASSES =10

# model architecture
model1 = Sequential()
model1.add(Dense(500, input_shape=INPUTSHAPE, activation='relu'))  
model1.add(Dense(150, activation='relu'))  
model1.add(Dense(50, activation='relu'))
model1.add(Dense(NUM_CLASSES, activation='softmax'))

# specifying the training configuration (optimizer, loss, metrics)
model1.compile(loss='sparse_categorical_crossentropy',
              optimizer=RMSprop(learning_rate=1e-4),metrics=['acc'])

history1 = model1.fit(train_images,train_labels, epochs=30, batch_size=64, 
                    validation_data=(test_images,test_labels))

Below is the error I got after running the code:

Epoch 1/30
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [152], in <cell line: 2>()
      1 # fitting the model
----> 2 history1 = model1.fit(train_images,train_labels, epochs=30, batch_size=64, 
      3                     validation_data=(test_images,test_labels))

File ~\Anaconda3\lib\site-packages\keras\utils\traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
     67     filtered_tb = _process_traceback_frames(e.__traceback__)
     68     # To get the full stack trace, call:
     69     # `tf.debugging.disable_traceback_filtering()`
---> 70     raise e.with_traceback(filtered_tb) from None
     71 finally:
     72     del filtered_tb

File C:\Users\OLUWAS~1.OLA\AppData\Local\Temp\__autograph_generated_filez3asmnt2.py:15, in outer_factory.<locals>.inner_factory.<locals>.tf__train_function(iterator)
     13 try:
     14     do_return = True
---> 15     retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
     16 except:
     17     do_return = False

ValueError: in user code:

    File "C:\Users\oluwasegun.olaniyan\Anaconda3\lib\site-packages\keras\engine\training.py", line 1249, in train_function  *
        return step_function(self, iterator)
    File "C:\Users\oluwasegun.olaniyan\Anaconda3\lib\site-packages\keras\engine\training.py", line 1233, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "C:\Users\oluwasegun.olaniyan\Anaconda3\lib\site-packages\keras\engine\training.py", line 1222, in run_step  **
        outputs = model.train_step(data)
    File "C:\Users\oluwasegun.olaniyan\Anaconda3\lib\site-packages\keras\engine\training.py", line 1023, in train_step
        y_pred = self(x, training=True)
    File "C:\Users\oluwasegun.olaniyan\Anaconda3\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
        raise e.with_traceback(filtered_tb) from None
    File "C:\Users\oluwasegun.olaniyan\Anaconda3\lib\site-packages\keras\engine\input_spec.py", line 295, in assert_input_compatibility
        raise ValueError(

    ValueError: Input 0 of layer "sequential_15" is incompatible with the layer: expected shape=(None, 60000, 784), found shape=(None, 28, 28)


How do I resolve this without using Flatten, as my aim is to use densely connected layers only?

CodePudding user response:

It looks like you are encountering a ValueError when trying to fit the model. This error is usually caused by a mismatch between the shape of the input data and the input shape specified in the model.

In this case, the issue is with the input_shape argument of the first Dense layer. The input data has shape (60000, 28, 28), but input_shape is set to (60000, 28*28), i.e. (60000, 784). Because input_shape describes a single sample (the batch dimension is added automatically), the model ends up expecting batches of shape (None, 60000, 784), while each actual sample is a 28x28 array, which is exactly what the error message reports.

To fix this issue, you can modify the input_shape argument to match the shape of a single sample. For example, you could change it to (28, 28) like this:

model1.add(Dense(500, input_shape=(28, 28), activation='relu'))
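
Note, however, that with input_shape=(28, 28) the Dense layers only operate on the last axis, so the model's output stays 3D and will not line up with the 1D class labels during training. Here is a minimal sketch illustrating that (this assumes tf.keras; check is just an illustrative model name, not from the question):

# Sketch: Dense applied to (28, 28) input acts on the last axis only,
# so the output shape keeps the extra 28 dimension.
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

check = Sequential()
check.add(Dense(500, input_shape=(28, 28), activation='relu'))
check.add(Dense(10, activation='softmax'))
print(check.output_shape)  # (None, 28, 10) -- labels of shape (None,) won't match this

So even with this change, the data would still need to be flattened to vectors of length 784, as shown in the next answer.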

CodePudding user response:

You should change:

model1.add(Dense(500, input_shape=INPUTSHAPE, activation='relu'))  

to

model1.add(Dense(500, input_shape=(28*28,), activation='relu'))  

Also, you should flatten the input data by changing:

train_images = train_images/ 255.0
test_images = test_images/ 255.0

to

train_images = train_images.reshape((-1, 28*28)) / 255.0
test_images = test_images.reshape((-1, 28*28)) / 255.0
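
Putting the two changes together, a minimal end-to-end sketch (assuming TensorFlow 2.x and tf.keras, with the same variable names as in the question) looks like this:

from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import RMSprop

# Load MNIST, flatten each 28x28 image to a 784-dimensional vector, and scale to [0, 1]
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((-1, 28*28)) / 255.0
test_images = test_images.reshape((-1, 28*28)) / 255.0

# Dense-only model; input_shape describes a single sample, not the whole dataset
model1 = Sequential()
model1.add(Dense(500, input_shape=(28*28,), activation='relu'))
model1.add(Dense(150, activation='relu'))
model1.add(Dense(50, activation='relu'))
model1.add(Dense(10, activation='softmax'))

model1.compile(loss='sparse_categorical_crossentropy',
               optimizer=RMSprop(learning_rate=1e-4),
               metrics=['acc'])

history1 = model1.fit(train_images, train_labels, epochs=30, batch_size=64,
                      validation_data=(test_images, test_labels))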