TypeError: tuple indices must be integers or slices, not str, facing this error in keras model


I am running a Keras model, LINK IS HERE. I have just changed the dataset for this model, and when I run it, it throws this error: TypeError: tuple indices must be integers or slices, not str. Since it is an image captioning model, the dataset is difficult for me to understand. See the code below, and note the location of the error.

reduce_lr = keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.2, patience=3
)
# Create an early stopping callback.
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True
)
history = dual_encoder.fit(
    train_dataloader,
    epochs=num_epochs,
    # validation_data=val_dataloader,
    # callbacks=[reduce_lr, early_stopping],
)
print("Training completed. Saving vision and text encoders...")
vision_encoder.save("vision_encoder")
text_encoder.save("text_encoder")
print("Models are saved.")


 TypeError                                 Traceback (most recent call last)
 <ipython-input-31-745dd79762e6> in <module>()
      15 history = dual_encoder.fit(
      16     train_dataloader,
 ---> 17     epochs=num_epochs,
      18     #validation_data=val_dataloader,
      19     #callbacks=[reduce_lr, early_stopping],

  11 frames
  <ipython-input-26-0696c83bf387> in call(self, features, training)
      16         with tf.device("/gpu:0"):
      17             # Get the embeddings for the captions.
 ---> 18             caption_embeddings = text_encoder(features["caption"], training=training)
      19             #caption_embeddings = text_encoder(train_inputs, training=training)
      20         with tf.device("/gpu:1"):

  TypeError: tuple indices must be integers or slices, not str

The error points to this line: caption_embeddings = text_encoder(features["caption"], training=training)

Now I am confused: I don't know whether this error is caused by the data I am passing to my model via history = dual_encoder.fit(train_dataloader), or whether it is related to caption_embeddings = text_encoder(features["caption"], training=training) and image_embeddings = vision_encoder(features["image"], training=training), which are defined in the DualEncoder class.

I don't know what these features["caption"] and features["image"] defined in the DualEncoder class are, and I have not changed those two lines for my new dataset, as you can check in my CODE HERE IN THIS COLAB FILE.
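To see which structure the dataloader actually returns, I could inspect one batch like this (just a diagnostic sketch, assuming train_dataloader is iterable, e.g. a tf.data.Dataset or a keras.utils.Sequence):

# Print the structure of one batch to see whether it is a dict with
# "image"/"caption" keys or a plain tuple of tensors.
batch = next(iter(train_dataloader))
print(type(batch))
if isinstance(batch, dict):
    print("keys:", list(batch.keys()))
elif isinstance(batch, (tuple, list)):
    for i, item in enumerate(batch):
        print(i, getattr(item, "shape", type(item)))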

CodePudding user response:

The dataset (train_dataloader) seems to return a tuple of items: link. In particular, the model input is a tuple (images, x_batch_input).

However, your code (in DualEncoder) seems to assume that it is a dict (with keys like "caption", "image", etc.). I think that is the source of the mismatch.
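Here is a minimal sketch of two possible fixes. It assumes train_dataloader yields (images, captions) tuples and that DualEncoder.call currently indexes features["image"] / features["caption"], matching the traceback; the names text_encoder, vision_encoder, dual_encoder, train_dataloader and num_epochs are taken from your snippet:

# Option 1: unpack the tuple inside DualEncoder.call instead of dict indexing
# (sketch only; the real class in the notebook also computes the loss, etc.).
def call(self, features, training=False):
    images, captions = features  # (images, captions) tuple from the dataloader
    with tf.device("/gpu:0"):
        caption_embeddings = text_encoder(captions, training=training)
    with tf.device("/gpu:1"):
        image_embeddings = vision_encoder(images, training=training)
    return caption_embeddings, image_embeddings

# Option 2: keep the dict-based DualEncoder and convert the dataset instead
# (only applicable if train_dataloader is a tf.data.Dataset of (images, captions)).
train_ds = train_dataloader.map(
    lambda images, captions: {"image": images, "caption": captions}
)
history = dual_encoder.fit(train_ds, epochs=num_epochs)

Option 2 has the advantage that the rest of the DualEncoder class (loss computation, etc.) stays untouched.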
