I have been using Keras/TF's `CSVLogger` to save the training and validation accuracies, and then plotting that data to check the trajectory of the training and validation accuracy or loss. Yesterday, I read this link, where they used `model.evaluate()` and plotted the accuracy metric.

What is the difference between these two approaches?
CodePudding user response:
Log and plot the results

- The `CSVLogger` callback saves the results of the training in a CSV file.
- The output of `fit`, normally called `history`, holds the same results in a variable, without needing a CSV file.

Their data should be exactly the same.
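A minimal sketch of that equivalence, using a hypothetical tiny model on synthetic data (the model, data, and file name are just placeholders, any compiled model works the same way):

```python
import csv

import numpy as np
import tensorflow as tf

# Synthetic data just for illustration.
x = np.random.rand(64, 4).astype("float32")
y = np.random.randint(0, 2, size=(64,)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# CSVLogger writes one row per epoch to disk...
logger = tf.keras.callbacks.CSVLogger("training_log.csv")
history = model.fit(x, y, validation_split=0.25, epochs=3,
                    callbacks=[logger], verbose=0)

# ...while history.history holds the same per-epoch numbers in memory.
print(history.history["loss"])

with open("training_log.csv") as f:
    rows = list(csv.DictReader(f))
print([float(r["loss"]) for r in rows])
```

The CSV is only needed if you want the log to survive the Python session; otherwise `history` already has everything.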
Evaluate

The `evaluate` method is not a log. It's a method you can call at any time, and it outputs a single value for each loss and metric you defined in `compile`, for the "current" model.

Nothing from `evaluate` is registered anywhere during training, and it should usually not be called during training, unless you create a custom training loop and manually call `evaluate` inside that loop every epoch (which is unnecessary and inconvenient for standard use).

You can use `evaluate` with any data you want, including data that was in neither the training nor the validation set.
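For example, a sketch of calling `evaluate` on completely unseen data (hypothetical model and data, for illustration only):

```python
import numpy as np
import tensorflow as tf

# Any compiled model works the same way; this one is a placeholder.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(4,)),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Data the model has never seen: neither training nor validation.
x_new = np.random.rand(32, 4).astype("float32")
y_new = np.random.randint(0, 2, size=(32,)).astype("float32")

# Returns one value per loss/metric listed in compile().
loss, acc = model.evaluate(x_new, y_new, verbose=0)
print(loss, acc)
```

The return order matches `compile`: the loss first, then each metric.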
The link you shared

Notice that what they plot is `history`, and `history` is the output of `fit`. It has nothing to do with `evaluate`.
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
See in the previous page of the link:

history = model.fit(...)

The link doesn't show exactly where they call `evaluate`; they're probably calling it only once, after training, just to see the final results of the model (not as a log of the training).
CodePudding user response:
Here is the key difference between the two:

`model.evaluate()` run on the training data gives a different loss from the one reported during training. The training-time loss is averaged over batches while the weights are still being updated (and with layers like dropout active), whereas `evaluate()` uses the final weights in inference mode. `CSVLogger`, by contrast, simply records the training and validation metrics after every epoch, exactly as computed during `fit`.
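A minimal sketch of that discrepancy, using a hypothetical tiny model with dropout (dropout is active during `fit` but disabled during `evaluate`, which makes the difference visible):

```python
import numpy as np
import tensorflow as tf

np.random.seed(0)
tf.random.set_seed(0)

x = np.random.rand(128, 4).astype("float32")
y = np.random.randint(0, 2, size=(128,)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dropout(0.5),  # active in fit(), off in evaluate()
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

history = model.fit(x, y, epochs=2, verbose=0)

# Loss logged during the last epoch: averaged over batches while the
# weights were still changing, with dropout active.
train_time_loss = history.history["loss"][-1]

# Loss from evaluate() on the same data: final weights, inference mode.
eval_loss, eval_acc = model.evaluate(x, y, verbose=0)

print(train_time_loss, eval_loss)
```

The two printed losses are computed over the same examples, yet generally do not match, for exactly the reasons above.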