I'm thinking of adding a small value close to 0, a so-called "epsilon", to the denominator to prevent a division-by-zero error, for example:
double EPS = DBL_MIN;  // requires <cfloat> (or <float.h> in C)
double no_zerodivision_error = 0.0 / (0.0 + EPS);
When setting this epsilon value, are there any general best practices or considerations to prevent future problems?
Also, if I choose between DBL_MIN and DBL_EPSILON, is there a preferred value between the two?
I thought any small number would be fine, but I'm afraid I might later run into a silent problem that is difficult to spot.
Edit 1) In my application there are many normal cases where the denominator can be zero, so I'm not considering throwing an exception.
Edit 2) There are cases where such an "epsilon" is added to the denominator, e.g. in some deep learning calculations. For example:
# eps: term added to the denominator to improve numerical stability (default: 1e-8)
torch.optim.Adam(..., eps=1e-08, ...)
There are also SO questions and answers such as this.