When training a model, you usually specify both a loss function and an accuracy metric.
Is there any negative effect from using a second loss function as the accuracy metric (e.g. mean_absolute_error)?
Using a second loss function as an accuracy metric has no effect on training itself: the network is optimized exactly as before, since you keep your original loss function. The only thing that changes is your accuracy metric, which gives you an idea of how well the network performs. As a consequence, your accuracy metric might show no improvement whatsoever, simply because the network is not being optimized for the error you are looking at.
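A small NumPy sketch illustrates why the monitored metric can diverge from the training loss: the two predictions below have identical mean squared error (the hypothetical training loss here), yet very different mean absolute error (the monitored metric). The data values are made up for illustration.

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean squared error: penalizes large errors quadratically.
    return float(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    # Mean absolute error: penalizes all errors linearly.
    return float(np.mean(np.abs(y_true - y_pred)))

y_true = np.array([0.0, 0.0, 0.0, 0.0])
pred_a = np.array([0.5, 0.5, 0.5, 0.5])  # many small errors
pred_b = np.array([1.0, 0.0, 0.0, 0.0])  # one large error

# Both predictions have MSE = 0.25, so a network trained on MSE
# cannot distinguish them -- but their MAE differs (0.5 vs. 0.25).
```

So an MSE-trained network may move between such solutions without the MAE metric improving at all, which is exactly the "no improvement whatsoever" situation described above.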
Generally speaking, it is a good idea to look at multiple accuracy metrics, since they give you a more complete picture of what your network is or is not learning. Just always keep in mind which loss function you actually use for training your network.
Another scenario is that you want to track accuracy metrics which are not differentiable, e.g. the IoU of bounding boxes. You would then use such a metric as a second accuracy metric, while training with a differentiable loss function.
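As a concrete illustration, here is a minimal sketch of IoU for two axis-aligned bounding boxes in `(x1, y1, x2, y2)` format. The `max(0, ...)` clamps make the function piecewise and flat over large regions, which is why plain IoU is typically monitored as a metric rather than optimized directly. The function name and box format are chosen for this example.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap at all.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two partially overlapping unit-area-4 boxes share a 1x1 region,
# so their IoU is 1 / (4 + 4 - 1) = 1/7.
overlap = iou((0, 0, 2, 2), (1, 1, 3, 3))
```

You would compute this per predicted/ground-truth box pair after each epoch and average it, alongside whatever differentiable loss (e.g. a coordinate regression loss) drives the actual weight updates.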