Floating point numbers are approximations of real numbers that can represent a much larger range of values than integers of the same size, at the cost of precision. If your question is about small arithmetic errors (e.g. why does 0.1 + 0.2 equal 0.300000001?) or decimal conversion errors, please read the tag page before posting.
Many questions asked here about floating point math are about small inaccuracies in floating point arithmetic. To use the example from the excerpt, `0.1 + 0.2` might result in `0.300000001` instead of the expected `0.3`. Errors like these are caused by the way floating point numbers are represented in computers' memory.
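The effect is easy to reproduce in any language that uses IEEE 754 doubles; here is a minimal sketch in Python, where the stored error happens to surface as 0.30000000000000004:

```python
import math

# 0.1, 0.2 and 0.3 all have no exact binary representation,
# so the sum carries a tiny representation error.
total = 0.1 + 0.2
print(total)         # 0.30000000000000004 (with 64-bit doubles)
print(total == 0.3)  # False: exact comparison fails

# The usual workaround: compare with a tolerance instead of equality.
print(math.isclose(total, 0.3))  # True
```

The exact digits printed depend on the language's float-to-string rules, but the failed equality test is the same everywhere doubles are used.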
Integers are stored as exact values of the numbers they represent. Floating point numbers are stored as two values: a significand and an exponent. Because both fields have a fixed number of bits, only a finite set of significand-exponent pairs exists, so most real numbers can only be approximated. As a result, some inaccuracy is unavoidable.
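The significand-exponent split can be inspected directly; a small illustration in Python using the standard library's `math.frexp`, `float.hex`, and `Decimal`:

```python
import math
from decimal import Decimal

# math.frexp returns (m, e) with x == m * 2**e and 0.5 <= abs(m) < 1.
print(math.frexp(0.1))   # (0.8, -3)

# float.hex shows the raw significand and exponent of the stored double.
print((0.1).hex())       # 0x1.999999999999ap-4

# Converting the stored bits back to decimal reveals the approximation:
print(Decimal(0.1))      # 0.1000000000000000055511151231257827...
```

Note that the significand printed for `frexp` is itself only the nearest representable double, which is exactly why `0.1` cannot be stored without error.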
Two commonly cited introductory-level resources about floating point math are What Every Computer Scientist Should Know About Floating-Point Arithmetic and the floating-point-gui.de.
FAQs:
- Why 0.1 does not exist in floating point
- Floating Point Math at https://0.30000000000000004.com/
Related tags:
- ieee-754 (the most widely used standard for floating-point computation)
- half-precision-float (16-bit float)
- single-precision (32-bit float)
- double-precision (64-bit float)
- extended-precision (80-bit float, usually)
- quadruple-precision (128-bit float)
- tags for floating-point types in C and C++
- tags for other aspects of floating point numbers and computations
Programming languages where all numbers are double-precision (64-bit) floats:
- javascript (see `Number.MAX_SAFE_INTEGER` on MDN and "What is JavaScript's highest integer value that a Number can go to without losing precision?")
- awk (see Expressions in awk in POSIX)
- lua (up to 5.2 only; 5.3 introduced integers, see Changes in the Language in the Lua 5.3 manual)
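A consequence of the all-numbers-are-doubles design is that integers are only exact up to 2^53. A quick demonstration in Python, using explicit `float` conversion since Python's own `int` type is arbitrary-precision:

```python
# A 64-bit double has a 53-bit significand, so every integer up to
# 2**53 is exact, but beyond that gaps appear between representable values.
limit = 2 ** 53          # JavaScript's Number.MAX_SAFE_INTEGER is 2**53 - 1
print(float(limit - 1) == float(limit))  # False: still distinguishable
print(float(limit) == float(limit + 1))  # True: 2**53 + 1 rounds to 2**53
```

This is exactly the precision loss the linked JavaScript and Lua discussions describe: above the limit, incrementing a number by one may produce the same stored value.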