To start out I just want to state that I have read this discussion.
Are floating-point values uniformly inaccurate across all possible values? Or does the inaccuracy increase as the values get farther and farther away from 0?
To understand this, you first need to be clear about what kind of accuracy you are talking about. Accuracy is usually a measure of the error introduced by a calculation, and I suspect you are not thinking only about calculations carried out entirely within the relevant floating-point format, but also about the conversion of decimal values into that format.
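To see why the distinction matters, here is a minimal Python sketch (using `math.ulp`, available since Python 3.9): the absolute gap between adjacent doubles grows with the magnitude of the value, while the gap relative to the value stays roughly constant.

```python
import math

# Spacing between adjacent doubles (one "unit in the last place")
# at different magnitudes: the absolute gap grows with the value,
# but the relative gap (gap / value) stays near 2^-52.
for x in [1.0, 1e4, 1e8, 1e16]:
    gap = math.ulp(x)  # distance from x to the next representable double
    print(f"x = {x:<8g}  ulp = {gap:.3e}  ulp/x = {gap / x:.3e}")
```

So whether "inaccuracy increases away from 0" depends on whether you measure the error in absolute or in relative terms.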
All of the following are answers to your question:
A consequence of the last point is that when you start out with a moderately large decimal number in scientific notation, e.g. 1.123*10^4, its value is an integer (11230) and it can therefore be converted exactly to binary floating point.
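A quick Python sketch illustrating this, using `decimal.Decimal` to print the exact value a double actually stores: 1.123*10^4 is the integer 11230 and is stored exactly, whereas 1.123 on its own has no finite binary expansion and is rounded.

```python
from decimal import Decimal

# Decimal(float) shows the exact value the binary double holds.
print(Decimal(1.123e4))        # 11230 -- stored exactly
print(Decimal(1.123))          # 1.12299999999999999822... -- rounded
print(float(1.123e4) == 11230) # True
```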