
To start out, I just want to state that I have read this discussion.

Are floating point numbers uniformly inaccurate across all possible values? Or does the inaccuracy increase as the values get farther and farther away from 0?

TaylorE

1 Answer


To understand this, you need to be clear about what kind of accuracy you are talking about. Accuracy is usually a measure of the error introduced by a calculation, and I suspect you are not thinking only about calculations carried out entirely in the relevant floating point format.

These are all answers to your question:

  • The precision - expressed in number of significant bits - of floating point numbers is constant over most of the range. (Only for denormal numbers does the precision decrease as the number gets smaller.) See the sketch after this list.
  • The accuracy of floating point operations is typically limited by the precision, so it is mostly constant over the range. See the previous point.
  • The accuracy with which you can convert decimal numbers to binary floating point is higher for integers than for numbers with a fractional component. This is because an integer can be represented exactly as a sum of powers of two, while most decimal fractions cannot be represented exactly as a finite sum of negative powers of two. (The typical example is that 0.1 becomes a repeating fraction in binary floating point.)
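
To make the first two points concrete, here is a minimal Python sketch (not part of the original answer) using math.ulp to show that the relative spacing between adjacent doubles stays roughly constant for normal numbers and only widens for denormals:

    import math

    # ulp(x) is the gap from x to the next representable double; ulp(x)/x is
    # therefore the relative precision, which stays near 2**-52 for normal
    # doubles regardless of magnitude.
    for x in [1e-300, 1.0, 1e4, 1e300]:
        print(f"x = {x:<8g}  ulp(x)/x = {math.ulp(x) / x:.3e}")

    # For denormal (subnormal) numbers the exponent has bottomed out, so the
    # relative precision degrades; for the smallest subnormal it is 1.
    tiny = 5e-324
    print(f"x = {tiny:g}  ulp(x)/x = {math.ulp(tiny) / tiny:.3e}")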

A consequence of the last point is that when you start out with a moderately large decimal number in scientific notation, e.g. 1.123*10^4, it has the same value as an integer (11230) and can therefore be converted exactly to binary floating point, as long as that integer fits in the significand.
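
To illustrate that conversion point (again a Python sketch, not part of the original answer), Decimal(float) displays the exact value a double actually stores, so you can see that 1.123*10^4 converts exactly while 0.1 does not:

    from decimal import Decimal

    # 1.123e4 is the integer 11230, which a double stores exactly.
    print(Decimal(1.123e4))   # 11230

    # 0.1 is a repeating fraction in binary, so only an approximation is stored.
    print(Decimal(0.1))       # 0.1000000000000000055511151231257827021181583404541015625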

Casperrw