Getting increase in val-loss and decrease in val-accuracy while running a deep learning model for text classification

Time:04-01

I am trying to classify text with labels 0 and 1 using a Bi-LSTM. It gives reasonably good accuracy during training, but on validation the loss keeps increasing and the accuracy tends to decrease. Please suggest how I can improve it. Shape of data: (1043708, 2)

here is my model

model = tf.keras.Sequential([
    # embedding layer
    tf.keras.layers.Embedding(word_count, 16, input_length=max_len),
    # dropout layer to reduce overfitting
    tf.keras.layers.Dropout(0.2),
    # bi-LSTM layer; return only the last output so the sequence is
    # collapsed to a 2-D tensor before the dense layers
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    # hidden dense layers (ReLU, not softmax, for intermediate layers)
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(32, activation='relu'),
    # prediction layer
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

model.compile(loss=tf.keras.losses.BinaryCrossentropy(), optimizer=tf.keras.optimizers.Adam(), metrics=['accuracy'])

model.summary()
history = model.fit(XPAD_train, Y_train,
                    validation_data=(XPAD_test, Y_test),
                    epochs=10, batch_size=batch_size,
                    callbacks=[callback_func], verbose=1)

CodePudding user response:

As you say in the comment, if you want cross-validation you can use sklearn.model_selection.KFold as below, and train your model on each X_train, y_train, X_test, y_test split:

from sklearn.model_selection import KFold
import numpy as np
X = np.array(["I'm good.", "I'm very good", "I'm bad", "I'm very bad"])
y = np.array(["pos", "pos", "neg", "neg"])
k_fold = KFold(n_splits=4)
for train_idx, test_idx in k_fold.split(X):
    print(f'Train_idx: {train_idx} | Test_idx: {test_idx}')
    print(f'X_train : {X[train_idx]}')
    print(f'y_train : {y[train_idx]}')
    print(f'X_test  : {X[test_idx]}')
    print(f'y_test  : {y[test_idx]}')
    print()

Output:

Train_idx: [1 2 3] | Test_idx: [0]
X_train : ["I'm very good" "I'm bad" "I'm very bad"]
y_train : ['pos' 'neg' 'neg']
X_test  : ["I'm good."]
y_test  : ['pos']

Train_idx: [0 2 3] | Test_idx: [1]
X_train : ["I'm good." "I'm bad" "I'm very bad"]
y_train : ['pos' 'neg' 'neg']
X_test  : ["I'm very good"]
y_test  : ['pos']

Train_idx: [0 1 3] | Test_idx: [2]
X_train : ["I'm good." "I'm very good" "I'm very bad"]
y_train : ['pos' 'pos' 'neg']
X_test  : ["I'm bad"]
y_test  : ['neg']

Train_idx: [0 1 2] | Test_idx: [3]
X_train : ["I'm good." "I'm very good" "I'm bad"]
y_train : ['pos' 'pos' 'neg']
X_test  : ["I'm very bad"]
y_test  : ['neg']
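KFold only produces the splits; to evaluate, you re-build and fit a fresh model inside the loop and average the fold scores. A minimal runnable sketch of that pattern, using a toy corpus and scikit-learn's CountVectorizer + LogisticRegression as a lightweight stand-in for the Keras pipeline (in the real code you would rebuild and fit the Bi-LSTM model inside the loop instead):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus standing in for the asker's text data
X = np.array(["good movie", "very good film", "bad movie", "very bad film",
              "great acting", "awful plot", "nice story", "terrible pacing"])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])

k_fold = KFold(n_splits=4, shuffle=True, random_state=0)
scores = []
for train_idx, test_idx in k_fold.split(X):
    # Fit the vectorizer on the training fold only, to avoid leaking
    # test-fold vocabulary into training
    vec = CountVectorizer().fit(X[train_idx])
    # A fresh model per fold; replace with building + fitting the Keras model
    clf = LogisticRegression().fit(vec.transform(X[train_idx]), y[train_idx])
    scores.append(clf.score(vec.transform(X[test_idx]), y[test_idx]))

print(f"Mean CV accuracy: {np.mean(scores):.2f}")
```

Averaging across folds gives a more stable estimate of generalization than a single train/validation split, which is useful when diagnosing the gap between training and validation accuracy.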