Why do I get this error when trying to fit a tensorflow model: list index out of range


I've been trying to create a face classification program using the tf.data pipeline in TensorFlow. The images I am using are all in color and there are only two classes, "me" and "not_me"; I am trying to make a classifier for my own face! Here is the code I have used:

I start off by loading the dataset and splitting it into train and validation.

images_ds = tf.data.Dataset.list_files(dir + '\\*.jpg', shuffle=True)
image_count = len(os.listdir())
train_count = int(image_count*0.8)
train_ds = images_ds.take(train_count)
val_ds = images_ds.skip(train_count)

These first two functions are used to get the labels and decode the images from my validation and training sets.

def get_labels(path):
    return tf.strings.split(path, '\\')[-2]
def decode_img(path):
    label = get_labels(path)
    img = tf.io.read_file(path)
    img = tf.io.decode_png(img, dtype=tf.uint8)
    img = tf.image.resize(img, [254, 254])
    return img, label

Then I make a training tuple with two lists, one for the labels and one for the image itself. I feel as if this is where I go wrong as the documentation asks for the data to be a tuple of (x, y) and not ([x], [y]) for fitting but I am not so sure.

train_tensor = ([], [])
for img, label in train_ds.map(decode_img):
    train_tensor[0].append(img)
    train_tensor[1].append(label)

Then I do this for the validation set!

val_tensor = ([], [])
for img, label in val_ds.map(decode_img):
    val_tensor[0].append(img)
    val_tensor[1].append(label)

Now it's all model work. I am using ResNet50 and doing a bit of transfer learning.

model_base = tf.keras.applications.resnet50.ResNet50(include_top=False, weights='imagenet', input_shape=(254, 254, 3))
for layers in model_base.layers:
    layers.trainable = False


global_avg_pooling = keras.layers.GlobalAveragePooling2D()(model_base.output)
output = keras.layers.Dense(2, activation='sigmoid')(global_avg_pooling)

face_classifier = keras.models.Model(inputs=model_base.input,
                                    outputs=output,
                                    name='ResNet50')


face_classifier.compile(loss='categorical_crossentropy',
                       optimizer=Adam(learning_rate=0.01),
                       metrics=['accuracy'])


epochs = 50
history = face_classifier.fit(train_tensor,
                             epochs=epochs,
                             validation_data=val_tensor)

I get this error when trying to fit the model:

IndexError                                Traceback (most recent call last)
C:\Users\FLEXSU~1\AppData\Local\Temp/ipykernel_13852/468442092.py in <module>
      1 epochs = 50
----> 2 history = face_classifier.fit(train_tensor,
      3                              epochs=epochs,
      4                              validation_data=val_tensor)

~\anaconda3\envs\tf-gpu\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
   1131          training_utils.RespectCompiledTrainableState(self):
   1132       # Creates a `tf.data.Dataset` and handles batch and epoch iteration.
-> 1133       data_handler = data_adapter.get_data_handler(
   1134           x=x,
   1135           y=y,

~\anaconda3\envs\tf-gpu\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py in get_data_handler(*args, **kwargs)
   1362   if getattr(kwargs["model"], "_cluster_coordinator", None):
   1363     return _ClusterCoordinatorDataHandler(*args, **kwargs)
-> 1364   return DataHandler(*args, **kwargs)
   1365 
   1366 

~\anaconda3\envs\tf-gpu\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py in __init__(self, x, y, sample_weight, batch_size, steps_per_epoch, initial_epoch, epochs, shuffle, class_weight, max_queue_size, workers, use_multiprocessing, model, steps_per_execution, distribute)
   1152     adapter_cls = select_data_adapter(x, y)
   1153     self._verify_data_adapter_compatibility(adapter_cls)
-> 1154     self._adapter = adapter_cls(
   1155         x,
   1156         y,

~\anaconda3\envs\tf-gpu\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py in __init__(self, x, y, sample_weights, sample_weight_modes, batch_size, epochs, steps, shuffle, **kwargs)
    255     inputs = pack_x_y_sample_weight(x, y, sample_weights)
    256 
--> 257     num_samples = set(int(i.shape[0]) for i in nest.flatten(inputs)).pop()
    258     _check_data_cardinality(inputs)
    259 

~\anaconda3\envs\tf-gpu\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py in <genexpr>(.0)
    255     inputs = pack_x_y_sample_weight(x, y, sample_weights)
    256 
--> 257     num_samples = set(int(i.shape[0]) for i in nest.flatten(inputs)).pop()
    258     _check_data_cardinality(inputs)
    259 

~\anaconda3\envs\tf-gpu\lib\site-packages\tensorflow\python\framework\tensor_shape.py in __getitem__(self, key)
    894       else:
    895         if self._v2_behavior:
--> 896           return self._dims[key].value
    897         else:
    898           return self._dims[key]

IndexError: list index out of range

CodePudding user response:

I don't know if it answers the question, but I see a problem in your code. Since you are doing binary classification, you should use:

output = keras.layers.Dense(1, activation='sigmoid')(global_avg_pooling)

and

face_classifier.compile(loss='binary_crossentropy',
                       optimizer=Adam(learning_rate=0.01),
                       metrics=['accuracy'])

For a multi-class problem, you would use output = keras.layers.Dense(n_class, activation='softmax')(global_avg_pooling)

Also, in your code, it is not clear whether the output label is a string or a number.
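
For what it's worth, one way to handle that last point is to turn the string label into a 0/1 number inside decode_img. This is only a sketch, assuming the parent folder names are exactly 'me' and 'not_me' and that you go with the Dense(1, activation='sigmoid') output suggested above:

def decode_img(path):
    # the parent folder name ('me' or 'not_me') is the label
    label_str = tf.strings.split(path, '\\')[-2]
    # encode it as 1.0 for 'me' and 0.0 for 'not_me'
    label = tf.cast(label_str == 'me', tf.float32)
    img = tf.io.read_file(path)
    img = tf.io.decode_jpeg(img, channels=3)  # the files are .jpg
    img = tf.image.resize(img, [254, 254])
    return img, label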

CodePudding user response:

You were right:

Then I make a training tuple with two lists, one for the labels and one for the image itself. I feel as if this is where I go wrong as the documentation asks for the data to be a tuple of (x, y) and not ([x], [y]) for fitting but I am not so sure.

You should define the data as follows (dummy example):

train_tensor_img = []
train_tensor_label = []
for _ in range(100):
    img, label = decode_img('')
    train_tensor_img.append(img)
    train_tensor_label.append(label)

val_tensor_img = []
val_tensor_label = []
for _ in range(20):
    img, label = decode_img('')
    val_tensor_img.append(img)
    val_tensor_label.append(label)

Then,

import numpy as np

history = model.fit(x=np.array(train_tensor_img), y=np.array(train_tensor_label),
                    epochs=5,
                    validation_data=(np.array(val_tensor_img), np.array(val_tensor_label)))

It worked for me.
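
As a side note, you could also skip building Python lists altogether and feed the tf.data pipeline straight into fit. This is just a rough sketch, assuming decode_img already returns a numeric label as in the first answer; the names train_batches/val_batches and the batch size of 32 are arbitrary:

# batch the mapped datasets and pass them to fit directly
train_batches = train_ds.map(decode_img).batch(32)
val_batches = val_ds.map(decode_img).batch(32)

history = face_classifier.fit(train_batches,
                              epochs=epochs,
                              validation_data=val_batches)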
