I am training a fairly large Keras U-Net model on some obscure data to do semantic image segmentation. The val_loss decreases initially, but after some hours it starts to shoot up. It was likely overfitting, so I added more data, which only delayed the same behavior.
I want to observe what is going on in the validation stage. Is it possible to output a "per-sample accuracy and loss" after each epoch, given that val_loss and val_accuracy are computed per epoch over the whole validation set, not per batch or per sample?
To put it another way, I want to find the sample (an image or numpy array) that gives the highest accuracy and the one that gives the lowest, and visualize both. I have already written a custom data generator from which I can output the file/array name/number currently being processed, but what I am looking to do is as described above.
Quite likely, I need some sort of callback, along the lines of the sketch below.
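Something roughly like this is what I imagine (a minimal sketch, not working code: it assumes the model is compiled with metrics=['accuracy'] so evaluate() returns a loss and an accuracy, and that my generator can be adapted into a re-iterable source of (image, mask, sample_id) triples, one sample at a time; PerSampleValidationLogger is just a placeholder name):

```python
import numpy as np
import tensorflow as tf


class PerSampleValidationLogger(tf.keras.callbacks.Callback):
    """Score each validation sample individually at the end of every epoch
    and report the best- and worst-scoring ones."""

    def __init__(self, val_samples):
        super().__init__()
        # val_samples: iterable of (image, mask, sample_id) triples,
        # assumed to be re-iterable every epoch (e.g. a list or a Sequence)
        self.val_samples = val_samples

    def on_epoch_end(self, epoch, logs=None):
        results = []
        for image, mask, sample_id in self.val_samples:
            # evaluate one sample at a time by adding a batch dimension of 1
            loss, acc = self.model.evaluate(
                image[np.newaxis, ...], mask[np.newaxis, ...], verbose=0
            )
            results.append((sample_id, loss, acc))
        results.sort(key=lambda r: r[1])  # sort by per-sample loss, ascending
        best, worst = results[0], results[-1]
        print(f"Epoch {epoch}: best {best[0]} (loss={best[1]:.4f}, acc={best[2]:.4f}), "
              f"worst {worst[0]} (loss={worst[1]:.4f}, acc={worst[2]:.4f})")
```

The idea would be to pass an instance of this in callbacks=[...] to model.fit, alongside the normal validation_data, but I am not sure this is the right or most efficient way to do it.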
Clarification (after comment): by sample I mean a data point (an image, in this context).
The problem at hand is a semantic segmentation problem (with three classes), not a classification one.
So basically, at a high level, the question is: which image in my validation set is scored as having high accuracy/low loss (and which as the opposite), so I can do a sanity check and see whether the loss and metric functions are making sense?