How can I show the model performance during training?


I built a U-Net model for my research. When I fit a CNN model or any transfer-learning model on my data set, I can see the model's performance per epoch as loss, accuracy, validation loss, and validation accuracy (shown below). But for my U-Net model, this output isn't displayed.

I want to watch my model's performance during training, for each epoch.

Note: my knowledge of the TensorFlow framework is limited.

The output I'm talking about looks like this:

Epoch 1/10
1875/1875 [==============================] - 32s 17ms/step - loss: 0.1992 - accuracy: 0.9395 - val_loss: 0.0711 - val_accuracy: 0.9785
Epoch 2/10
1875/1875 [==============================] - 31s 16ms/step - loss: 0.0694 - accuracy: 0.9788 - val_loss: 0.0454 - val_accuracy: 0.9850
Epoch 3/10
1875/1875 [==============================] - 32s 17ms/step - loss: 0.0507 - accuracy: 0.9839 - val_loss: 0.0333 - val_accuracy: 0.9884
Epoch 4/10
1875/1875 [==============================] - 31s 16ms/step - loss: 0.0403 - accuracy: 0.9868 - val_loss: 0.0360 - val_accuracy: 0.9890
Epoch 5/10
1875/1875 [==============================] - 31s 16ms/step - loss: 0.0342 - accuracy: 0.9888 - val_loss: 0.0337 - val_accuracy: 0.9895
Epoch 6/10
1875/1875 [==============================] - 31s 16ms/step - loss: 0.0283 - accuracy: 0.9909 - val_loss: 0.0301 - val_accuracy: 0.9898
Epoch 7/10
1875/1875 [==============================] - 32s 17ms/step - loss: 0.0245 - accuracy: 0.9922 - val_loss: 0.0260 - val_accuracy: 0.9918
Epoch 8/10
1875/1875 [==============================] - 31s 16ms/step - loss: 0.0222 - accuracy: 0.9930 - val_loss: 0.0290 - val_accuracy: 0.9905
Epoch 9/10
1875/1875 [==============================] - 31s 16ms/step - loss: 0.0188 - accuracy: 0.9934 - val_loss: 0.0302 - val_accuracy: 0.9914
Epoch 10/10
1875/1875 [==============================] - 30s 16ms/step - loss: 0.0169 - accuracy: 0.9944 - val_loss: 0.0388 - val_accuracy: 0.9886
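For comparison, here is a minimal sketch of the kind of fit call that produces output like the log above (it is not my actual CNN code, just a small MNIST classifier for illustration):

import tensorflow as tf

# illustrative only: a small classifier whose fit() prints per-epoch metrics
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# verbose=1 (the default) prints loss/accuracy each epoch; passing
# validation_data adds val_loss/val_accuracy to the same line
model.fit(x_train, y_train,
          epochs=10,
          validation_data=(x_test, y_test),
          verbose=1)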

Compile:

import numpy as np
import tensorflow as tf

# instantiate the U-Net model (GiveMeUnet comes from my attached code)
inputs = tf.keras.layers.Input((256, 256, 3))
myTransformer = GiveMeUnet(inputs, droupouts=0.07)
myTransformer.compile(optimizer='Adam', loss='binary_crossentropy', metrics=['accuracy'])

Fit:

# train on the image/mask arrays; verbose=0 disables the per-epoch progress output
retVal = myTransformer.fit(np.array(framObjTrain['img']), np.array(framObjTrain['mask']), epochs=100, verbose=0)
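
(For reference, after training finishes I can read the recorded per-epoch values from the History object that fit() returns; a minimal sketch, assuming the fit call above. What I actually want is to see these values live while training runs.)

# illustrative: retVal.history maps each metric name to a list of per-epoch values
print(retVal.history.keys())  # e.g. dict_keys(['loss', 'accuracy'])
for epoch, (loss, acc) in enumerate(zip(retVal.history['loss'],
                                        retVal.history['accuracy']), start=1):
    print(f"Epoch {epoch}: loss={loss:.4f} - accuracy={acc:.4f}")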

I have attached the complete code in case anyone wants to see it: Sample
