`double` should be used whenever you are working with real numbers where perfect precision is not required. Here are some common examples:
- Computer graphics, for several reasons: exact precision is rarely required, since few monitors have more than a low four-digit number of pixels in either dimension; additionally, most trigonometric functions are available only for `float` and `double`, and trigonometry is essential to most graphics work (see the sketch after this list)
- Statistical analysis; metrics like mean and standard deviation are typically expected to carry only slightly more precision than the individual data points
- Randomness (e.g. `Random.nextDouble()`), where the point isn't a specific number of digits; the priority is a real number drawn from some specified distribution
- Machine learning, where multiplication factors are being learned and specific decimal precision isn't required
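
For the graphics and randomness cases, here is a minimal Java sketch (the class and variable names are made up for illustration, not taken from any particular library):

```java
import java.util.Random;

public class DoubleIsFine {
    public static void main(String[] args) {
        // Trigonometry: Math.sin/Math.cos operate on double, and sub-pixel
        // error is irrelevant once the result is rounded to a pixel anyway.
        double angle = Math.toRadians(30);
        double x = 100, y = 0;
        double rotatedX = x * Math.cos(angle) - y * Math.sin(angle);
        double rotatedY = x * Math.sin(angle) + y * Math.cos(angle);
        System.out.printf("rotated point: (%.3f, %.3f)%n", rotatedX, rotatedY);

        // Randomness: the goal is a value uniformly distributed in [0, 1),
        // not a value with an exact decimal representation.
        double sample = new Random().nextDouble();
        System.out.println("uniform sample: " + sample);
    }
}
```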
For values like money, using `double` at any point at all is a recipe for a bad time.
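
A quick illustration of why: 0.10 has no exact binary representation, so summing ten dimes with `double` does not give exactly one dollar.

```java
public class MoneyGoesWrong {
    public static void main(String[] args) {
        // Ten cents added ten times should be exactly one dollar...
        double total = 0.0;
        for (int i = 0; i < 10; i++) {
            total += 0.10;
        }
        System.out.println(total);          // prints 0.9999999999999999, not 1.0
        System.out.println(total == 1.00);  // prints false
    }
}
```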
`BigDecimal` should generally be used for money and anything else where you care about a specific number of decimal digits, but it has inferior performance and offers fewer mathematical operations.
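
As a rough sketch of the `BigDecimal` approach (the price and tax rate here are invented for illustration): construct values from `String`s so no binary rounding creeps in, and make the rounding policy explicit with `setScale`.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class MoneyWithBigDecimal {
    public static void main(String[] args) {
        // Construct from Strings, not doubles, so the values start out exact.
        BigDecimal price = new BigDecimal("19.99");
        BigDecimal taxRate = new BigDecimal("0.0825");

        // Keep exactly two decimal places, with an explicit rounding policy.
        BigDecimal tax = price.multiply(taxRate).setScale(2, RoundingMode.HALF_UP);
        BigDecimal total = price.add(tax);

        System.out.println("tax:   " + tax);    // 1.65
        System.out.println("total: " + total);  // 21.64
    }
}
```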