
I am currently learning Java and have stumbled upon the usage of `BigDecimal`. From what I have seen, `BigDecimal` is more precise, and it is therefore recommended to use it instead of the normal `double`.

My question is: Should I always use `BigDecimal` over `double` because it is always better, or does it have some disadvantages? And is it worth it for a beginner to switch to `BigDecimal`, or is it only recommended once you have more experience with Java?

Zohka
  • BigDecimal has far worse performance. Use it only if you need extreme precision, such as when storing money. – VGR Jul 17 '20 at 20:22
  • Same question as [this](https://stackoverflow.com/questions/3413448/double-vs-bigdecimal) – SevvyP Jul 17 '20 at 20:24
  • *therefore it is recommended to use it* **if you need the precision**. The only typical case where this is true is money, and you should probably use something like JMoney there. – chrylis -cautiouslyoptimistic- Jul 17 '20 at 20:31
  • Check out this discussion as well: https://stackoverflow.com/questions/6320209/javawhy-should-we-use-bigdecimal-instead-of-double-in-the-real-world/6320316. double has limited precision, while BigDecimal represents decimal numbers exactly (except non-terminating fractions like 1/7, 1/3, etc.; see the sketch below). As @VGR said, BigDecimal is usually for money. – jwpol Jul 17 '20 at 20:55
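
A minimal sketch of the precision gap the comments describe (class name is just for illustration): double stores a binary approximation of 0.1 and 0.2, while a BigDecimal built from a String keeps the exact decimal digits.

```java
import java.math.BigDecimal;

public class PrecisionDemo {
    public static void main(String[] args) {
        // 0.1 and 0.2 have no exact binary representation as doubles
        System.out.println(0.1 + 0.2);        // 0.30000000000000004
        System.out.println(0.1 + 0.2 == 0.3); // false

        // BigDecimal built from Strings keeps the decimal digits exactly
        BigDecimal sum = new BigDecimal("0.1").add(new BigDecimal("0.2"));
        System.out.println(sum);              // 0.3
    }
}
```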

1 Answer


double should be used whenever you are working with real numbers where perfect precision is not required. Here are some common examples:

  • Computer graphics, for several reasons: exact precision is rarely required, as few monitors have more than a low four-digit number of pixels; additionally, most trigonometric functions are available only for float and double, and trigonometry is essential to most graphics work
  • Statistical analysis; metrics like mean and standard deviation are typically expected to have at most a little more precision than the individual data points (see the sketch after this list)
  • Randomness (e.g. Random.nextDouble()), where the point isn't a specific number of digits; the priority is a real number over some specific distribution
  • Machine learning, where multiplication factors are being learned and specific decimal precision isn't required
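
To make the statistics bullet concrete, here is a minimal sketch (the sample values are made up): double's rounding error is on the order of one part in 10^15, far below the precision anyone expects from a mean or standard deviation.

```java
import java.util.Arrays;

public class StatsSketch {
    public static void main(String[] args) {
        double[] samples = {12.3, 11.7, 12.9, 12.1, 11.8};

        // mean of the samples
        double mean = Arrays.stream(samples).average().orElse(Double.NaN);

        // population standard deviation
        double variance = Arrays.stream(samples)
                .map(x -> (x - mean) * (x - mean))
                .average()
                .orElse(Double.NaN);
        double stdDev = Math.sqrt(variance);

        System.out.printf("mean=%.3f stdDev=%.3f%n", mean, stdDev);
    }
}
```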

For values like money, using double at any point at all is a recipe for a bad time. BigDecimal should generally be used for money and anything else where you care about a specific number of decimal digits, but it has inferior performance and offers fewer mathematical operations.
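
For example, a minimal sketch of the money case (the price and tax rate are made up). Note the String constructor: new BigDecimal(0.1) would capture double's binary approximation of 0.1, digits and all.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class MoneySketch {
    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("19.99");
        BigDecimal taxRate = new BigDecimal("0.07");

        // rounding must be explicit: choose a scale (2 decimal digits)
        // and a RoundingMode instead of relying on binary rounding
        BigDecimal tax = price.multiply(taxRate).setScale(2, RoundingMode.HALF_UP);
        BigDecimal total = price.add(tax);

        System.out.println(tax);   // 1.40
        System.out.println(total); // 21.39
    }
}
```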

Louis Wasserman
  • 191,574
  • 25
  • 345
  • 413
  • BigDecimal should _always_ be used for money? No, no. Usually, `int` or `long` should be used, representing the smallest unit generally considered for transactions: from satoshis (BTC) to eurocents to dollar cents to yen. Neither BigDecimal nor int/long can divide properly. BD becomes useful when series of multiplications by rational factors occur on entirely internal systems, which is rare (see the sketch below). – rzwitserloot Jul 17 '20 at 21:53
  • Almost any processing of measured physical quantities, such as lengths and weights. Unless the algorithms being used are unusually ill-conditioned, the measurement error will be much larger than `double` rounding error. – Patricia Shanahan Jul 21 '20 at 07:07
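
A minimal sketch of the long-as-smallest-unit approach from the first comment above (the price, quantity, and tax rate are made up); rounding is done in pure integer arithmetic, so no double ever touches the money.

```java
public class CentsSketch {
    public static void main(String[] args) {
        // money as a count of the smallest unit: here, cents
        long priceCents = 1_999; // $19.99
        long quantity = 3;

        long subtotalCents = priceCents * quantity; // 5997, exact

        // 7% tax, rounded half-up to the nearest cent, integers only
        long taxCents = (subtotalCents * 7 + 50) / 100; // 420 -> $4.20

        long totalCents = subtotalCents + taxCents;
        System.out.printf("total: $%d.%02d%n",
                totalCents / 100, totalCents % 100);
    }
}
```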