
I am training a neural network for image classification. Comparing my 11th and 14th epochs, I observed that the validation accuracy increased even though the validation loss also increased. I am currently using a callback that monitors validation loss with mode set to min. Given this behaviour, should I instead use a callback that monitors validation accuracy with mode set to max? Which of the two is better to adopt?
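For reference, here is roughly what I mean by the two options, as a minimal sketch assuming the Keras callback API (the filename and patience value are just placeholders):

```python
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

# Option 1 (current setup): follow validation loss, lower is better
stop_on_loss = EarlyStopping(monitor='val_loss', mode='min', patience=5)

# Option 2 (being considered): follow validation accuracy, higher is better
# (older Keras versions log this metric as 'val_acc' instead)
save_on_acc = ModelCheckpoint('best_by_acc.h5',
                              monitor='val_accuracy', mode='max',
                              save_best_only=True)

# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=20, callbacks=[stop_on_loss, save_on_acc])
```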

Jitesh Malipeddi
  • When the loss decreases, the accuracy is indeed expected to increase. What exactly is the issue (or the surprise) here? – desertnaut Jul 31 '19 at 14:00
  • I am so sorry, I made a serious mistake in the question. I actually meant accuracy surprisingly increases even though my validation loss increases. Could you please remove the downvote. – Jitesh Malipeddi Jul 31 '19 at 14:10
  • Please put the training log as text, not a screenshot of running the notebook. – dedObed Jul 31 '19 at 14:19
  • Attaching images is not how SO works, and I regret to say it is enough reason for downvoting; furthermore, it *can* occasionally happen that loss & accuracy are simultaneously increasing (or decreasing) in *small* quantities (like here) - please see my own answer at [Loss & accuracy - Are these reasonable learning curves?](https://stackoverflow.com/questions/47817424/loss-accuracy-are-these-reasonable-learning-curves/47819022#47819022) for the delicate interplay between these two quantities. – desertnaut Jul 31 '19 at 14:25
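
To make the interplay from the last comment concrete, here is a toy numeric sketch (the labels, probabilities, and epoch tags are invented for illustration, not taken from the actual training run): between two epochs, every prediction can move to the correct side of the 0.5 threshold (so accuracy rises) while also becoming less confident (so cross-entropy loss rises too).

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred):
    """Mean binary cross-entropy (log loss)."""
    y_pred = np.clip(y_pred, 1e-7, 1 - 1e-7)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def accuracy(y_true, y_pred):
    """Fraction of predictions on the correct side of 0.5."""
    return np.mean((y_pred > 0.5) == (y_true == 1))

y_true = np.array([1, 1, 0])

# "Epoch 11": two confident correct predictions, one confident mistake
p11 = np.array([0.90, 0.40, 0.10])
# "Epoch 14": every prediction is now correct, but only barely,
# so each one contributes more loss than a confident correct answer
p14 = np.array([0.55, 0.55, 0.45])

print(accuracy(y_true, p11), binary_cross_entropy(y_true, p11))  # 0.667, ~0.38
print(accuracy(y_true, p14), binary_cross_entropy(y_true, p14))  # 1.000, ~0.60
```

Accuracy only counts which side of the threshold each prediction lands on, while cross-entropy also penalises low confidence, which is why the two metrics can move in the same direction over a few epochs.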

0 Answers