I am attempting a sandbox project using the wine reviews dataset and want to combine text data with some engineered numeric features in a neural network, but I am receiving a ValueError.
The three sets of features I have are the description (the actual review text), the scaled price, and the scaled number of words (the length of the description). I converted the target variable y into a dichotomous variable representing good or bad reviews, turning this into a classification problem.
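For context, a minimal sketch of how such a target and the engineered features could be built, assuming df is already loaded with the standard Kaggle wine-reviews columns (points, price, description); the 90-point cutoff for a "good" review is just an illustrative choice:
from sklearn.preprocessing import StandardScaler
df = df.dropna(subset=['price'])                            # price has missing values
df['y'] = (df['points'] >= 90).astype(int)                  # dichotomous good/bad target
df['num_words'] = df['description'].str.split().str.len()   # length of each review
df[['scaled_price', 'scaled_num_words']] = StandardScaler().fit_transform(
    df[['price', 'num_words']])
(In practice the scaler would be fit on the training split only; scaling the full frame here just keeps the sketch short.)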
Whether or not these are the best features to use is not the point; I am hoping to combine NLP with meta/numeric data. When I run the code with just the description it works fine, but adding the additional variables causes a ValueError.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import Input, Embedding, Bidirectional, LSTM, Dropout, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import RMSprop

# df is the preprocessed wine reviews DataFrame (description, scaled_price, scaled_num_words, y)
y = df['y']
X = df.drop('y', axis=1)
# split up the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
X_train.head()
description_train = X_train['description']
description_test = X_test['description']
#subsetting the numeric variables
numeric_train = X_train[['scaled_price','scaled_num_words']].to_numpy()
numeric_test = X_test[['scaled_price','scaled_num_words']].to_numpy()
MAX_VOCAB_SIZE = 60000
tokenizer = Tokenizer(num_words=MAX_VOCAB_SIZE)
tokenizer.fit_on_texts(description_train)
sequences_train = tokenizer.texts_to_sequences(description_train)
sequences_test = tokenizer.texts_to_sequences(description_test)
word2idx = tokenizer.word_index
V = len(word2idx)
print('Found %s unique tokens.' % V)
# Found 31598 unique tokens.
nlp_train = pad_sequences(sequences_train)
print('Shape of data train tensor:', nlp_train.shape)
# Shape of data train tensor: (91944, 136)
# get sequence length
T = nlp_train.shape[1]
nlp_test = pad_sequences(sequences_test, maxlen=T)
print('Shape of data test tensor:', nlp_test.shape)
# Shape of data test tensor: (45286, 136)
data_train = np.concatenate((nlp_train,numeric_train), axis=1)
data_test = np.concatenate((nlp_test,numeric_test), axis=1)
# Choosing embedding dimensionality
D = 20
# Hidden state dimensionality
M = 40
nlp_input = Input(shape=(T,),name= 'nlp_input')
meta_input = Input(shape=(2,), name='meta_input')
emb = Embedding(V + 1, D)(nlp_input)
emb = Bidirectional(LSTM(64, return_sequences=True))(emb)
emb = Dropout(0.40)(emb)
emb = Bidirectional(LSTM(128))(emb)
nlp_out = Dropout(0.40)(emb)
x = tf.concat([nlp_out, meta_input], 1)
x = Dense(64, activation='swish')(x)
x = Dropout(0.40)(x)
x = Dense(1, activation='sigmoid')(x)
model = Model(inputs=[nlp_input, meta_input], outputs=[x])
#next, create a custom optimizer
optimizer1 = RMSprop(learning_rate=0.0001)
# Compile and fit
model.compile(
    loss='binary_crossentropy',
    optimizer=optimizer1,
    metrics=['accuracy']
)
print('Training model...')
r = model.fit(
    data_train,
    y_train,
    epochs=5,
    validation_data=(data_test, y_test))
I apologize if that was overkill, but I wanted to make sure I didn't leave out any relevant clues or information. The error I get from running the code is:
ValueError: Layer model expects 2 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 138) dtype=float32>]
How do I resolve that error?
CodePudding user response:
Thank you for posting all your code. These two lines are the problem:
data_train = np.concatenate((nlp_train,numeric_train), axis=1)
data_test = np.concatenate((nlp_test,numeric_test), axis=1)
A single NumPy array is treated as one input regardless of its shape, but your model defines two Input layers (nlp_input and meta_input), so Keras expects two separate inputs rather than one concatenated (None, 138) array.
Either use a tf.data.Dataset and feed it directly to your model:
# one dataset yielding the (nlp, numeric) input pair, one yielding the labels
train_dataset = tf.data.Dataset.from_tensor_slices((nlp_train, numeric_train))
labels = tf.data.Dataset.from_tensor_slices(y_train.values)
# zip the features with the labels (not with the features again) and batch it
dataset = tf.data.Dataset.zip((train_dataset, labels)).batch(32)
r = model.fit(dataset, epochs=5)
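The validation split can be packaged the same way and passed through validation_data (the batch size of 32 simply mirrors the training dataset above):
val_dataset = tf.data.Dataset.zip((
    tf.data.Dataset.from_tensor_slices((nlp_test, numeric_test)),
    tf.data.Dataset.from_tensor_slices(y_test.values)
)).batch(32)
r = model.fit(dataset, epochs=5, validation_data=val_dataset)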
Or just feed your data directly to model.fit() as a list of inputs:
r = model.fit(
    [nlp_train, numeric_train],
    y_train,
    epochs=5,
    validation_data=([nlp_test, numeric_test], y_test))
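Alternatively, if you want to keep the single concatenated arrays from your original code, you can slice them back into the two inputs at fit time; the first T columns are the padded sequences and the last two are the numeric features:
r = model.fit(
    [data_train[:, :T], data_train[:, T:]],
    y_train,
    epochs=5,
    validation_data=([data_test[:, :T], data_test[:, T:]], y_test))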