How to fix InvalidArgumentError: logits and labels must be broadcastable: logits_size=[32,198] labels_size=[32,3]


I am trying to do surface defect detection on images (checking walls for defects such as cracks…). When I try to fit the model it throws the error logits and labels must be broadcastable: logits_size=[32,198] labels_size=[32,3]

I tried a few things but nothing worked. How do I get past the error, or is there something wrong with the approach I chose? The data I am working with is unlabelled image data (all the images are in a single folder).

from keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.models import Model
from glob import glob

train_model = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)

test_model = ImageDataGenerator(rescale = 1./255)

training_data = train_model.flow_from_directory('/Users/nm2/Public/ai-dataset-training-100/5/23_463_DISTACCO_DEL_COPRIFERRO_Q100_training_dataset',
                                                 target_size = (224, 224),
                                                 batch_size = 32,
                                                 class_mode = 'categorical')

testing_data = test_model.flow_from_directory('/Users/nm2/Public/ai-dataset-training-100/5/23_463_DISTACCO_DEL_COPRIFERRO_Q100_training_dataset',
                                            target_size = (224, 224),
                                            batch_size = 32,
                                            class_mode = 'categorical')

IMAGE_SIZE = [224, 224]

# Import the VGG16 base with ImageNet pre-trained weights (without its top classification layers)

vgg_model = VGG16(input_shape=IMAGE_SIZE   [3], weights='imagenet', include_top=False)


for layer in vgg_model.layers:
    layer.trainable = False

x = Flatten()(vgg_model.output)

# We use glob to list the contents of the dataset directory and use that count as the number of classes.

folder_count = glob('/Users/nm2/Public/ai-dataset-training-100/5/23_493_PANORAMICA_LIVELLO_BASE_ISPEZIONE_Q100_training_dataset/*')

prediction = Dense(len(folder_count), activation='softmax')(x)

#Create a Model 
model = Model(inputs=vgg_model.input, outputs=prediction)

model.summary()

model.compile(
  loss='categorical_crossentropy',
  optimizer='adam',
  metrics=['accuracy']
)


post_run = model.fit(training_data,
  validation_data=testing_data,
  epochs=10,
  steps_per_epoch=len(training_data),
  validation_steps=len(testing_data))


InvalidArgumentError:  logits and labels must be broadcastable: logits_size=[32,198] labels_size=[32,3]
     [[node categorical_crossentropy/softmax_cross_entropy_with_logits (defined at var/folders/3b/tfwxbsyd41j64kbrjghzrvcm0000gq/T/ipykernel_1068/3441923959.py:5) ]] [Op:__inference_train_function_1205]

Function call stack:
train_function

CodePudding user response:

You have this code as your model's top layer:

prediction = Dense(len(folder_count), activation='softmax')(x)

The number of neurons in this layer must equal the number of classes in your dataset (see the sketch at the end of this answer). Also, in model.fit you have

steps_per_epoch=len(training_data), validation_steps=len(testing_data))

This should be the number of batches per epoch, i.e. the sample count divided by the batch size:

batch_size=32
steps_per_epoch=training_data.samples // batch_size
validation_steps=testing_data.samples // batch_size

Alternatively, you can omit these arguments entirely and model.fit will determine the right values internally. You also have this code:

vgg_model = VGG16(input_shape=IMAGE_SIZE   [3]

Change this to:

vgg_model = VGG16(input_shape=[224, 224, 3], weights='imagenet', include_top=False)
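Putting those changes together, a minimal sketch of the corrected top of the model (assuming the class count should come from the generator rather than from the glob of a different directory) could look like this:

number_of_classes = training_data.num_classes   # what flow_from_directory actually detected
prediction = Dense(number_of_classes, activation='softmax')(x)
model = Model(inputs=vgg_model.input, outputs=prediction)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# let model.fit infer steps_per_epoch and validation_steps from the generators
post_run = model.fit(training_data, validation_data=testing_data, epochs=10)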

CodePudding user response:

Here is the full code that should work for you.

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import categorical_crossentropy
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Activation,Dropout

data_dir=r'C:\Temp\DATA' # directory where the image files are stored; change this to your directory
vsplit=.2 # fraction of the data to be used for validation
IMAGE_SIZE = [224, 224]
IMAGE_SHAPE=[224,224,3]

train_model = ImageDataGenerator(rescale = 1./255, shear_range = 0.2, zoom_range = 0.2,
                                   horizontal_flip = True, validation_split=vsplit)
test_model = ImageDataGenerator(rescale = 1./255, validation_split=vsplit)
training_data = train_model.flow_from_directory(data_dir, target_size = IMAGE_SIZE,batch_size = 32,
                                                 class_mode = 'categorical',  subset='training',
                                                 shuffle = True, seed=123)
testing_data =  test_model.flow_from_directory(data_dir, target_size = IMAGE_SIZE, batch_size = 32,
                                                class_mode = 'categorical', subset='validation',
                                                shuffle=True, seed=123)
class_dict=training_data.class_indices
classes=list(class_dict.keys())
print ('LIST OF CLASSES ', classes)
print ('CLASS DICTIONARY ',class_dict)
number_of_classes=len(classes) # this is the number of neurons in your top layer of the model
print ('Number of classes = ', number_of_classes)

base_model=tf.keras.applications.VGG19(include_top=False, weights="imagenet",input_shape=IMAGE_SHAPE, pooling='max') 
# Note: setting pooling='max' eliminates the need for a Flatten layer.
# I do not recommend VGG; it is a very large model. I recommend EfficientNetB3 instead.
# Note: do not rescale the pixels for EfficientNet (it expects raw 0-255 inputs).
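# As a rough sketch of the EfficientNetB3 alternative mentioned above (assumes TF >= 2.3);
# EfficientNet applies its own input scaling, so the generators should then feed raw 0-255
# pixels (drop rescale=1./255):
# base_model=tf.keras.applications.EfficientNetB3(include_top=False, weights="imagenet",
#                                                 input_shape=IMAGE_SHAPE, pooling='max')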
x=base_model.output
base_model.trainable=False
x = Dense(256, activation='relu')(x)
x=Dropout(rate=.45, seed=123)(x)        
output=Dense(number_of_classes, activation='softmax')(x)
model=Model(inputs=base_model.input, outputs=output)
model.compile(Adam(learning_rate=.001), loss='categorical_crossentropy', metrics=['accuracy']) 

epochs=5
# I recommend the use of callbacks to control the learning rate and early stopping
rlronp=tf.keras.callbacks.ReduceLROnPlateau( monitor="val_loss", factor=0.5,  patience=1, verbose=1)
estop=tf.keras.callbacks.EarlyStopping( monitor="val_loss", patience=3, verbose=1, restore_best_weights=True)
history=model.fit(x=training_data,  epochs=epochs, verbose=1,  validation_data=testing_data,
                  callbacks=[rlronp, estop], validation_steps=None,  shuffle=True,  initial_epoch=0)

# Fine Tune the model
base_model.trainable=True # make the base model trainable
model.compile(Adam(learning_rate=.001), loss='categorical_crossentropy', metrics=['accuracy']) # re-compile so the change to trainable takes effect
epochs=5
history=model.fit(x=training_data,  epochs=epochs, verbose=1,  validation_data=testing_data,
                  callbacks=[rlronp, estop], validation_steps=None,  shuffle=True,  initial_epoch=0)
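
As a quick sanity check after fine tuning, a minimal sketch using the validation generator defined above:

# evaluate the fine-tuned model on the held-out validation split
val_loss, val_acc = model.evaluate(testing_data, verbose=1)
print('validation loss:', val_loss, '  validation accuracy:', val_acc)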