
I'm trying to understand the bias and variance more.

I'm wondering if there is a loss function considering bias and variance.
As far as I know, high bias leads to underfitting and high variance leads to overfitting.

[image: bias–variance illustration, linked from an external source]

If we could account for bias and variance in the loss, it might look like this: bias(x) + variance(x) + some_other_loss(x). My question is two-part.

  1. Is there a loss function that considers bias and variance?
  2. If the losses we normally use already account for bias and variance, how can I measure the bias and variance separately as scores?
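Regarding question 2: one common empirical approach (not from the question itself, and only feasible on synthetic data where the true function is known) is to refit the model on many resampled training sets and decompose the test-point error into squared bias and variance. A minimal sketch, assuming `sin` as the true function and a polynomial fit:

```python
import numpy as np

# Hypothetical synthetic setup: bias and variance are defined relative to
# the true function, so we can only measure them when we know it (f = sin).
rng = np.random.default_rng(0)
f = np.sin                       # assumed true function
x_test = np.linspace(0, np.pi, 50)
n_models, n_train, noise = 200, 30, 0.3
degree = 1                       # try 1 (high bias) vs. 9 (high variance)

preds = np.empty((n_models, x_test.size))
for i in range(n_models):
    # fresh training sample each time, same data-generating process
    x = rng.uniform(0, np.pi, n_train)
    y = f(x) + rng.normal(0, noise, n_train)
    coeffs = np.polyfit(x, y, degree)
    preds[i] = np.polyval(coeffs, x_test)

mean_pred = preds.mean(axis=0)
bias_sq = np.mean((mean_pred - f(x_test)) ** 2)  # squared bias at test points
variance = np.mean(preds.var(axis=0))            # spread across refits
print(f"bias^2 = {bias_sq:.4f}, variance = {variance:.4f}")
```

With `degree = 1` the squared-bias term dominates (underfitting); raising the degree shifts error into the variance term (overfitting).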

This may be a fundamental mathematical question, I think. If you have any hints, I'd really appreciate it.

Thank you for reading my weird question.


After writing the question, I realized that regularization is one way to reduce the variance. Then a third question: 3) is there a way to measure the bias as a score?
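The variance-reducing effect of regularization can be checked directly: refit a ridge-regularized linear model (coefficients `w = (XᵀX + λI)⁻¹Xᵀy`) on many resampled training sets and compare the spread of the fitted coefficients with and without the penalty. A small illustration, with made-up data, not from the question:

```python
import numpy as np

rng = np.random.default_rng(1)

def ridge_fit(X, y, lam):
    # closed-form ridge solution: w = (X^T X + lam*I)^(-1) X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

true_w = np.array([1.0, -2.0, 0.5])
coefs = {0.0: [], 10.0: []}          # lam=0 is plain least squares
for _ in range(300):
    # a fresh small training set each round
    X = rng.normal(size=(20, 3))
    y = X @ true_w + rng.normal(0, 1.0, 20)
    for lam in coefs:
        coefs[lam].append(ridge_fit(X, y, lam))

# total variance of the coefficient estimates across refits
totals = {lam: np.array(ws).var(axis=0).sum() for lam, ws in coefs.items()}
print(totals)
```

The regularized fits vary less from resample to resample (at the cost of some added bias, since the coefficients are shrunk toward zero).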

Thank you again.


Update at Jan 16th, 2022

I have searched a little bit and answered myself. If there are wrong understandings, please comment below.

  1. Bias is reflected in the loss value during training, so we don't need an additional bias loss function.

For the variance, however, there seems to be no way to score it from training alone, because measuring it requires both the training loss and the loss on unseen data. But once we use unseen data to compute a training loss, it becomes seen data, and it is no longer unseen from the model's perspective. So, as far as I understand, there is no way to measure variance from the training loss alone.

I hope this helps others, and please comment if you have other thoughts.

tucan9389

1 Answer


As you have clearly stated, high bias means the model is underfitting relative to a good fit, and high variance means it is overfitting relative to a good fit.

Measuring either of them requires you to know the good fit in advance, which happens to be the end goal of training the model. Hence, it is not possible to measure underfitting or overfitting during training itself. However, if you have an idea of a target loss value, you can use an early-stopping callback to stop training around the good fit.
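Early stopping can be sketched without any framework: monitor the loss on a held-out validation split and stop once it has not improved for a fixed number of steps. A minimal numpy version with assumed names (`patience`, `bad`) and made-up data, standing in for a framework callback such as Keras's `EarlyStopping`:

```python
import numpy as np

# Synthetic regression problem (illustration only)
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true + rng.normal(0, 0.5, 100)
X_tr, y_tr = X[:80], y[:80]          # training split
X_va, y_va = X[80:], y[80:]          # held-out validation split

w = np.zeros(5)
best_loss, best_w = np.inf, w.copy()
patience, bad = 10, 0                # stop after 10 non-improving steps
for step in range(5000):
    # one gradient-descent step on the training MSE
    grad = X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)
    w -= 0.05 * grad
    val_loss = np.mean((X_va @ w - y_va) ** 2)
    if val_loss < best_loss - 1e-6:
        best_loss, best_w, bad = val_loss, w.copy(), 0
    else:
        bad += 1
        if bad >= patience:          # validation loss plateaued: stop
            break
print(f"stopped at step {step}, best validation MSE {best_loss:.4f}")
```

The returned `best_w` is the weight vector from the best validation step, not the last one, which is the usual "restore best weights" behavior of early-stopping callbacks.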

jdsurya