I am using TensorFlow-Keras to develop a CNN model, and I have split the dataset into train, validation, and test sets. I need to evaluate the model on the test set at the end of each epoch, alongside the train and validation sets, to track model performance. Below is my code to track the train and validation sets.
result_dic = {"epochs": []}
json_logging_callback = LambdaCallback(
on_epoch_begin=lambda epoch, logs: [learning_rate],
on_epoch_end=lambda epoch, logs:
result_dic["epochs"].append({
'epoch': epoch 1,
'acc': str(logs['acc']),
'val_acc': str(logs['val_acc'])
}))
model.fit(x_train, y_train,
validation_data=(x_val, y_val),
batch_size=batch_size,
epochs=epochs,
callbacks=[json_logging_callback])
Output:
Epoch 1/5
1/1 [==============================] - 4s 4s/step - acc: 0.8611 - val_acc: 0.8333
However, I'm not sure how to add the test set to my callback to produce the following output.
Expected output:
Epoch 1/5
1/1 [==============================] - 4s 4s/step - acc: 0.8611 - val_acc: 0.8333 - test_acc: xxx
CodePudding user response:
To display your test accuracy after each epoch, you could customize your fit function to compute and report this extra metric (see the Keras guide on customizing what happens in fit()), or you could define a simple callback for your test dataset, as sketched below, and pass it into your fit function:
model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          batch_size=batch_size,
          epochs=epochs,
          callbacks=[json_logging_callback,
                     your_test_callback((X_test, Y_test))])
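For illustration, here is a minimal sketch of what such a test callback might look like. The class name TestAccuracyCallback and the logged key 'test_acc' are my own choices, and it assumes the model is compiled with a single accuracy metric, so model.evaluate returns [loss, acc]:

import tensorflow as tf

class TestAccuracyCallback(tf.keras.callbacks.Callback):
    """Sketch: evaluate the model on a held-out test set after each epoch."""

    def __init__(self, test_data):
        super().__init__()
        self.x_test, self.y_test = test_data

    def on_epoch_end(self, epoch, logs=None):
        # self.model is attached by Keras when the callback is registered.
        loss, acc = self.model.evaluate(self.x_test, self.y_test, verbose=0)
        if logs is not None:
            logs['test_acc'] = acc  # visible to callbacks that run after this one
        print(f" - test_acc: {acc:.4f}")

You would then replace your_test_callback with TestAccuracyCallback in the callbacks list above. Listing it before json_logging_callback lets the logger pick up logs['test_acc'] as well.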
If you want complete flexibility, you could try writing a custom training loop.
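As a sketch of that route, a loop built on tf.GradientTape lets you compute metrics on all three splits exactly when you want them. This assumes sparse integer labels (hence SparseCategoricalCrossentropy/SparseCategoricalAccuracy) and reuses the variable names from the snippets above:

import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
acc_metric = tf.keras.metrics.SparseCategoricalAccuracy()

train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(batch_size)

for epoch in range(epochs):
    # --- training ---
    acc_metric.reset_state()
    for x_batch, y_batch in train_ds:
        with tf.GradientTape() as tape:
            preds = model(x_batch, training=True)
            loss = loss_fn(y_batch, preds)
        grads = tape.gradient(loss, model.trainable_weights)
        optimizer.apply_gradients(zip(grads, model.trainable_weights))
        acc_metric.update_state(y_batch, preds)
    train_acc = acc_metric.result()

    # --- validation ---
    acc_metric.reset_state()
    acc_metric.update_state(y_val, model(x_val, training=False))
    val_acc = acc_metric.result()

    # --- test (batch these calls if the sets are large) ---
    acc_metric.reset_state()
    acc_metric.update_state(Y_test, model(X_test, training=False))
    test_acc = acc_metric.result()

    print(f"Epoch {epoch + 1}: acc: {train_acc:.4f} "
          f"- val_acc: {val_acc:.4f} - test_acc: {test_acc:.4f}")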