
I understand what an underflow is and when it could occur based on this.

However, my question is: given that an underflow has occurred, what determines the amount of precision lost?

I'm computing an array in C++ and I do see some numerical errors such as 1.333e-18 when the array should actually be 0 everywhere. However, the numerical error is different across the array. Is there a rule to determine the amount of precision loss?

tryingtosolve
  • @JGroven I'm asking the way to determine the *amount* of precision loss though? Not why, but how much? – tryingtosolve Mar 02 '18 at 22:22
  • The first answer has a link that talks about the normalization and de-normalization process, which (I think) is what you're looking for. – JGroven Mar 02 '18 at 22:24
  • Consider fixed point math as well if you need a firm control of digit representation and error. – Michael Dorgan Mar 02 '18 at 22:40
  • You start losing significant digits at 1.2e-38 and have none left at 1.4e-45. 1.3e-18 is not a denormal value, just the normal outcome of imprecise calculation. If you know that the mathematical result should be 0 then you have the unusual convenience of knowing the absolute error. – Hans Passant Mar 02 '18 at 22:50
  • It depends: `123456789.0f - 1.0f` will give you an unexpected result. – Richard Critten Mar 02 '18 at 22:53

0 Answers