Why do training accuracy and validation accuracy oscillate and increase very slowly?


I am working on a text classification problem and I am facing extremely slow growth in training and validation accuracy, along with oscillations. Kindly help me figure out the problem.

Following is the information about my data split and model:

x_train: 47,723 samples
x_test: 3,372 samples
x_valid: 1,631 samples
batch size = 64, epochs = 50

Model

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import (Input, Embedding, Reshape, Conv2D,
                                     MaxPool2D, Concatenate, Dropout, Dense)

maxlen = 2500
max_features = 40000
embedding_dim = 100
num_filters = 128
filter_sizes = [1, 3, 5]

inp = Input(shape=(maxlen,), dtype="int32")
# embedding_matrix holds pretrained word vectors, built elsewhere
embedding_layer = Embedding(max_features,
                            embedding_dim,
                            weights=[embedding_matrix],
                            input_length=maxlen,
                            trainable=False)(inp)


print(embedding_layer.shape)
# add a trailing channel axis so Conv2D can treat the sequence as a one-channel image
reshape = Reshape((maxlen, embedding_dim, 1))(embedding_layer)
print(reshape.shape)

conv_0 = Conv2D(num_filters, kernel_size=(filter_sizes[0], embedding_dim), padding='valid', kernel_initializer='normal', activation='relu')(reshape)
conv_1 = Conv2D(num_filters, kernel_size=(filter_sizes[1], embedding_dim), padding='valid', kernel_initializer='normal', activation='relu')(reshape)
conv_2 = Conv2D(num_filters, kernel_size=(filter_sizes[2], embedding_dim), padding='valid', kernel_initializer='normal', activation='relu')(reshape)

# max-over-time pooling: collapse each feature map to a single value per filter
maxpool_0 = MaxPool2D(pool_size=(maxlen - filter_sizes[0] + 1, 1), strides=(1, 1), padding='valid')(conv_0)
maxpool_1 = MaxPool2D(pool_size=(maxlen - filter_sizes[1] + 1, 1), strides=(1, 1), padding='valid')(conv_1)
maxpool_2 = MaxPool2D(pool_size=(maxlen - filter_sizes[2] + 1, 1), strides=(1, 1), padding='valid')(conv_2)


concatenated_tensor = Concatenate(axis=1)([maxpool_0, maxpool_1, maxpool_2])
# squeeze out the singleton dimensions left over from pooling
newdim = tuple([x for x in concatenated_tensor.shape.as_list() if x != 1 and x is not None])
reshape_layer = Reshape(newdim)(concatenated_tensor)
# Attention here is a custom layer defined elsewhere; the stock
# keras.layers.Attention expects a [query, value] list, not a single tensor
attention = Attention()(reshape_layer)
dropout = Dropout(0.2)(attention)
# 8,922 sigmoid units suggest a multi-label output
output = Dense(units=8922, activation='sigmoid')(dropout)

# build the full model from input to output
model = keras.models.Model(inputs=inp, outputs=output)

model.summary()
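
The compile step is not shown, but the log below reports a loss together with a custom f1_m metric, and the 8,922 sigmoid outputs point to multi-label classification. A compile call along these lines would reproduce those columns; the f1_m implementation here is a common Keras-backend sketch, assumed since the asker's definition is not included:

from tensorflow.keras import backend as K

def f1_m(y_true, y_pred):
    # batch-level micro F1; an assumed stand-in for the asker's metric
    y_pred = K.round(K.clip(y_pred, 0, 1))
    tp = K.sum(y_true * y_pred)
    precision = tp / (K.sum(y_pred) + K.epsilon())
    recall = tp / (K.sum(y_true) + K.epsilon())
    return 2 * precision * recall / (precision + recall + K.epsilon())

# binary cross-entropy is the usual pairing with multi-label sigmoid outputs
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=[f1_m])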

Model Training

Train...
Epoch 1/50
746/746 [==============================] - 302s 392ms/step - loss: 0.0115 - f1_m: 0.0470 - val_loss: 0.0093 - val_f1_m: 0.0754
Epoch 2/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0080 - f1_m: 0.1368 - val_loss: 0.0086 - val_f1_m: 0.1276
Epoch 3/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0075 - f1_m: 0.1763 - val_loss: 0.0082 - val_f1_m: 0.1459
Epoch 4/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0072 - f1_m: 0.2030 - val_loss: 0.0079 - val_f1_m: 0.1566
Epoch 5/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0070 - f1_m: 0.2232 - val_loss: 0.0077 - val_f1_m: 0.1654
Epoch 6/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0069 - f1_m: 0.2370 - val_loss: 0.0076 - val_f1_m: 0.1794
Epoch 7/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0067 - f1_m: 0.2485 - val_loss: 0.0074 - val_f1_m: 0.2144
Epoch 8/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0066 - f1_m: 0.2603 - val_loss: 0.0073 - val_f1_m: 0.2255
Epoch 9/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0065 - f1_m: 0.2689 - val_loss: 0.0073 - val_f1_m: 0.2131
Epoch 10/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0064 - f1_m: 0.2759 - val_loss: 0.0072 - val_f1_m: 0.2311
Epoch 11/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0064 - f1_m: 0.2841 - val_loss: 0.0072 - val_f1_m: 0.2394
Epoch 12/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0063 - f1_m: 0.2892 - val_loss: 0.0072 - val_f1_m: 0.2298
Epoch 13/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0062 - f1_m: 0.2938 - val_loss: 0.0071 - val_f1_m: 0.2480
Epoch 14/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0062 - f1_m: 0.2991 - val_loss: 0.0071 - val_f1_m: 0.2436
Epoch 15/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0061 - f1_m: 0.3018 - val_loss: 0.0070 - val_f1_m: 0.2595
Epoch 16/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0061 - f1_m: 0.3062 - val_loss: 0.0070 - val_f1_m: 0.2679
Epoch 17/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0060 - f1_m: 0.3095 - val_loss: 0.0070 - val_f1_m: 0.2566
Epoch 18/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0060 - f1_m: 0.3138 - val_loss: 0.0070 - val_f1_m: 0.2655
Epoch 19/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0060 - f1_m: 0.3166 - val_loss: 0.0070 - val_f1_m: 0.2498
Epoch 20/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0059 - f1_m: 0.3192 - val_loss: 0.0070 - val_f1_m: 0.2789
Epoch 21/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0059 - f1_m: 0.3210 - val_loss: 0.0071 - val_f1_m: 0.2638
Epoch 22/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0059 - f1_m: 0.3236 - val_loss: 0.0071 - val_f1_m: 0.2537
Epoch 23/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0058 - f1_m: 0.3255 - val_loss: 0.0069 - val_f1_m: 0.2832
Epoch 24/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0058 - f1_m: 0.3268 - val_loss: 0.0070 - val_f1_m: 0.2770
Epoch 25/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0058 - f1_m: 0.3291 - val_loss: 0.0070 - val_f1_m: 0.2723
Epoch 26/50
746/746 [==============================] - 291s 391ms/step - loss: 0.0058 - f1_m: 0.3300 - val_loss: 0.0069 - val_f1_m: 0.2864
Epoch 27/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0058 - f1_m: 0.3314 - val_loss: 0.0069 - val_f1_m: 0.2779
Epoch 28/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0058 - f1_m: 0.3324 - val_loss: 0.0070 - val_f1_m: 0.2778
Epoch 29/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0057 - f1_m: 0.3344 - val_loss: 0.0070 - val_f1_m: 0.2824
Epoch 30/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0057 - f1_m: 0.3353 - val_loss: 0.0070 - val_f1_m: 0.2656
Epoch 31/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0057 - f1_m: 0.3367 - val_loss: 0.0070 - val_f1_m: 0.2889
Epoch 32/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0057 - f1_m: 0.3376 - val_loss: 0.0070 - val_f1_m: 0.2822
Epoch 33/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0057 - f1_m: 0.3386 - val_loss: 0.0070 - val_f1_m: 0.2904
Epoch 34/50
746/746 [==============================] - 292s 392ms/step - loss: 0.0057 - f1_m: 0.3397 - val_loss: 0.0070 - val_f1_m: 0.2849
Epoch 35/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0057 - f1_m: 0.3407 - val_loss: 0.0070 - val_f1_m: 0.2767
Epoch 36/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0057 - f1_m: 0.3415 - val_loss: 0.0069 - val_f1_m: 0.2957
Epoch 37/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0057 - f1_m: 0.3426 - val_loss: 0.0070 - val_f1_m: 0.2946
Epoch 38/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0056 - f1_m: 0.3441 - val_loss: 0.0070 - val_f1_m: 0.2793
Epoch 39/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0056 - f1_m: 0.3443 - val_loss: 0.0070 - val_f1_m: 0.2928
Epoch 40/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0056 - f1_m: 0.3449 - val_loss: 0.0070 - val_f1_m: 0.2852
Epoch 41/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0056 - f1_m: 0.3447 - val_loss: 0.0070 - val_f1_m: 0.2883
Epoch 42/50
746/746 [==============================] - 292s 392ms/step - loss: 0.0056 - f1_m: 0.3468 - val_loss: 0.0069 - val_f1_m: 0.3026
Epoch 43/50
333/746 [============>.................] - ETA: 2:40 - loss: 0.0056 - f1_m: 0.3519

CodePudding user response:

Have you tried increasing the learning rate? A rate that is too low can be the reason why it learns so slowly.
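
For example, with the Adam optimizer you could start above the default 1e-3 and let a plateau callback shrink the rate once progress stalls. This is an illustrative sketch with untuned values, and it assumes label arrays named y_train and y_valid alongside the question's x_train and x_valid:

from tensorflow import keras

optimizer = keras.optimizers.Adam(learning_rate=3e-3)  # Adam's default is 1e-3
model.compile(optimizer=optimizer,
              loss='binary_crossentropy',
              metrics=[f1_m])  # the custom metric from the question

# halve the rate when validation loss plateaus; this also damps
# the oscillations visible in the log above
reduce_lr = keras.callbacks.ReduceLROnPlateau(
    monitor='val_loss', factor=0.5, patience=3, min_lr=1e-5)

model.fit(x_train, y_train,
          batch_size=64, epochs=50,
          validation_data=(x_valid, y_valid),
          callbacks=[reduce_lr])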

CodePudding user response:

Is there any reason for using convolution layers in your architecture? Your data is text, and convolution layers are most commonly used for image data.

I think you should try using only dense layers and see if there is any change in accuracy; a minimal baseline is sketched below.
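
A dense-only baseline could keep the question's frozen pretrained embeddings (embedding_matrix, the sequence length, and the 8,922-label output are taken from the question) and simply average the word vectors instead of convolving over them:

from tensorflow import keras
from tensorflow.keras.layers import (Input, Embedding, GlobalAveragePooling1D,
                                     Dense, Dropout)

inp = Input(shape=(2500,), dtype='int32')
x = Embedding(40000, 100, weights=[embedding_matrix], trainable=False)(inp)
x = GlobalAveragePooling1D()(x)       # average word vectors over the sequence
x = Dense(256, activation='relu')(x)  # hidden width is an illustrative choice
x = Dropout(0.2)(x)
out = Dense(units=8922, activation='sigmoid')(x)

baseline = keras.models.Model(inputs=inp, outputs=out)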
