Invalid argument: indices[124,0] = 2629 is not in [0, 64) (multi-input model)


I am following Deep Learning with Python, section 7.1.2, "Multi-input models." Running the code from Listing 7.1, I get the following error:

InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument:  indices[124,0] = 2629 is not in [0, 64)
     [[node functional_11/embedding_8/embedding_lookup (defined at E:/Studies/PythonCode_DLBook/Codes/Chap7_Code2.py:30) ]]
     [[functional_11/embedding_9/embedding_lookup/_16]]
  (1) Invalid argument:  indices[124,0] = 2629 is not in [0, 64)
     [[node functional_11/embedding_8/embedding_lookup (defined at E:/Studies/PythonCode_DLBook/Codes/Chap7_Code2.py:30) ]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_29208]

Errors may have originated from an input operation.
Input Source operations connected to node functional_11/embedding_8/embedding_lookup:
 functional_11/embedding_8/embedding_lookup/26947 (defined at C:\Users\abdul\anaconda3\envs\PIAIC\lib\contextlib.py:113)

Input Source operations connected to node functional_11/embedding_8/embedding_lookup:
 functional_11/embedding_8/embedding_lookup/26947 (defined at C:\Users\abdul\anaconda3\envs\PIAIC\lib\contextlib.py:113)

Function call stack:
train_function -> train_function

The code used is:

from tensorflow.keras.models import Model
from tensorflow.keras import layers
from tensorflow.keras import Input
import numpy as np

text_vocabulary_size = 10000
question_vocabulary_size = 10000
answer_vocabulary_size = 500

text_input = Input(shape=(100,), dtype='int32', name='text')
embedded_text = layers.Embedding(64, text_vocabulary_size)(text_input)
encoded_text = layers.LSTM(32)(embedded_text)

question_input = Input(shape=(100,), dtype='int32', name='question')
embedded_question = layers.Embedding(32, question_vocabulary_size)(question_input)
encoded_question = layers.LSTM(16)(embedded_question)

concatenated = layers.concatenate([encoded_text, encoded_question], axis=-1)
answer = layers.Dense(answer_vocabulary_size,
                      activation='softmax')(concatenated)
model = Model([text_input, question_input], answer)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['acc'])

num_samples = 1000
max_length = 100
text = np.random.randint(1, text_vocabulary_size, size=(num_samples, max_length))
question = np.random.randint(1, question_vocabulary_size, size=(num_samples, max_length))
answers = np.random.randint(0, 1, size=(num_samples, answer_vocabulary_size))

model.fit([text, question], answers, epochs=10, batch_size=128)
model.fit({'text': text, 'question': question}, answers, epochs=10, batch_size=128)

I do realize the error is on the embedded_text layer, because its configuration does not match the shape of the data coming into it.

However, I don't know how to solve this problem; in fact, I don't currently know how to set or check the input data shapes, or the shapes passed between layers. It would be really helpful if someone could show how to check layer shapes while designing the model and how to resolve this kind of issue.

CodePudding user response:

For TF, a common method is to use model.summary() to check the output shape at each layer of the network. Running your code returns:

Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
text (InputLayer)               [(None, 100)]        0
__________________________________________________________________________________________________
question (InputLayer)           [(None, 100)]        0
__________________________________________________________________________________________________
embedding (Embedding)           (None, 100, 10000)   1000000     text[0][0]
__________________________________________________________________________________________________
embedding_1 (Embedding)         (None, 100, 10000)   1000000     question[0][0]
__________________________________________________________________________________________________
lstm (LSTM)                     (None, 100)          4040400     embedding[0][0]
__________________________________________________________________________________________________
lstm_1 (LSTM)                   (None, 100)          4040400     embedding_1[0][0]
__________________________________________________________________________________________________
concatenate (Concatenate)       (None, 200)          0           lstm[0][0]
                                                                 lstm_1[0][0]
__________________________________________________________________________________________________
dense (Dense)                   (None, 500)          100500      concatenate[0][0]
==================================================================================================
Total params: 10,181,300
Trainable params: 10,181,300
Non-trainable params: 0

So this will be the first step in troubleshooting. If you would like to see the expected input shape of each layer, model.get_config() is one way to do it; I would refer you to the question here.
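If you prefer to inspect the shapes programmatically rather than eyeballing the summary, something along these lines works (a minimal sketch, assuming the model from Listing 7.1 above has already been built):

from pprint import pprint

# Output shape of every layer, the same information model.summary() prints
for layer in model.layers:
    print(layer.name, layer.output_shape)

# get_config() exposes each layer's constructor arguments, e.g. the
# Embedding layer's input_dim (vocabulary size) and output_dim (embedding size)
pprint(model.get_config()['layers'])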

Moreover, I would suggest that you read the documentation for layers.LSTM and layers.Embedding to get a solid grasp of the parameters you are passing in and the layers you are creating. Hope this helps with the troubleshooting process :)
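To spell out what those docs say about the parameters (take this as my reading of the error rather than a tested fix): the first positional argument of layers.Embedding is input_dim, the vocabulary size, and the second is output_dim, the embedding dimension. In the posted code the two are swapped, so the embedding table only has 64 (respectively 32) rows while the token indices go up to 9999, which is exactly what indices[124,0] = 2629 is not in [0, 64) is complaining about. The change would look something like:

# Sketch of the likely fix: vocabulary size first, embedding dimension second
embedded_text = layers.Embedding(text_vocabulary_size, 64)(text_input)
embedded_question = layers.Embedding(question_vocabulary_size, 32)(question_input)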
