
I would like to know the exact difference between BigDecimal and double. I know that BigDecimal is more precise than double and you should use the former for calculations.

Why is it more precise? Why isn't it possible to customize double so that it works like BigDecimal? Or are there any advantages to calculating with double?

Jamal
Obl Tobl

8 Answers


BigDecimal

Immutable, arbitrary-precision signed decimal numbers. A BigDecimal consists of an arbitrary-precision integer unscaled value and a 32-bit integer scale. If zero or positive, the scale is the number of digits to the right of the decimal point. If negative, the unscaled value of the number is multiplied by ten to the power of the negation of the scale. The value of the number represented by the BigDecimal is therefore (unscaledValue × 10^(-scale)).
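For illustration, here is a minimal sketch (the class name is hypothetical) showing how the unscaled value and scale relate to the represented value, via the `unscaledValue()` and `scale()` accessors:

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class ScaleDemo { // demo class name is hypothetical
    public static void main(String[] args) {
        BigDecimal bd = new BigDecimal("123.45");
        BigInteger unscaled = bd.unscaledValue(); // 12345
        int scale = bd.scale();                   // 2
        // value = unscaledValue x 10^(-scale) = 12345 x 10^(-2) = 123.45
        System.out.println(unscaled + " x 10^(-" + scale + ") = " + bd);
    }
}
```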

A double, by contrast, has a fixed precision: 64 bits, which gives roughly 15–17 significant decimal digits.

EDIT: BigDecimal is a real object, not a primitive one. Thus, it abstracts the numerical representation and is not bound by physical (read: memory) restrictions. (Courtesy of Max Leske.)

AllTooSir
    Also: `BigDecimal` is a real object, not a primitive one (as opposed to `double`). Thus, it abstracts numerical representation and is not bound by physical (read: memory) restrictions. – Max Leske Jun 20 '13 at 09:09

A double is a remarkably fast floating point data type implemented at a very low level on many chipsets.

Its precision is sufficient for very many applications: for example, it can measure the distance from the Sun to Pluto to the nearest centimetre!

There is always a performance trade-off to consider when thinking about moving to a more precise data type: the latter will be much slower, and your favourite mathematical libraries may not support it. Remember that the outputs of your program are a function of the quality of its inputs.

As a final remark, never use a double to represent cash quantities!
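To make that last point concrete, here is a small sketch (class name hypothetical) of the classic accumulation error that makes double unsuitable for money:

```java
import java.math.BigDecimal;

public class CashDemo { // demo class name is hypothetical
    public static void main(String[] args) {
        // Summing ten dimes with double accumulates binary rounding error:
        double d = 0.0;
        for (int i = 0; i < 10; i++) {
            d += 0.1;
        }
        System.out.println(d); // 0.9999999999999999, not 1.0

        // The same sum with BigDecimal (note the string constructor) is exact:
        BigDecimal bd = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            bd = bd.add(new BigDecimal("0.1"));
        }
        System.out.println(bd); // 1.0
    }
}
```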

Bathsheba

Why is BigDecimal more precise?

  1. Double's size in memory is fixed at 64 bits (8 bytes). This limits it to 15 to 17 decimal digits of accuracy. BigDecimal can grow to any size you need it to.

  2. Double operates in binary which means it can only precisely represent numbers which can be expressed as a finite number in binary. For example, 0.375 in binary is exactly 0.011. (To put it another way, it is a sum of powers of 2: it is 2^(-2) + 2^(-3).) But a number like 0.1 cannot be precisely represented as a double, because in binary it is 0.0001100110011..., which doesn't terminate. BigDecimal operates in decimal, so it can precisely represent numbers such as 0.1 that we are familiar with in decimal. (However, although this expands the range of precisely representable values, it doesn't eliminate the underlying problem; for example, the value one third cannot be precisely represented by either type, because it has a non-terminating expansion in both binary (0.010101...) and decimal (0.33333...).)
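A quick way to see point 2 in action is to hand the double literal 0.1 to BigDecimal's double constructor, which exposes the binary value the double actually stores (class name hypothetical):

```java
import java.math.BigDecimal;

public class RepresentationDemo { // demo class name is hypothetical
    public static void main(String[] args) {
        // The double constructor shows what 0.1 really is in binary:
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625

        // The string constructor represents one tenth exactly:
        System.out.println(new BigDecimal("0.1")); // 0.1
    }
}
```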

Are there any advantages in calculating with double?

Absolutely!

  1. Since it takes a comparatively tiny amount of memory, double is much better suited for long arrays of numbers.

  2. Double's fixed binary format makes it fast. It can be handled efficiently in both software and hardware. CPUs implement double arithmetic in dedicated circuitry.

  3. With BigDecimal, the object can grow arbitrarily large in memory, because repeated computation can produce ever-longer fractions, but this isn't something you need to worry about with double since its size is fixed.
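Point 3 can be observed directly: repeated squaring doubles the scale of a BigDecimal each time, while a double subjected to the same computation would stay 8 bytes (at the cost of silently rounding away low-order digits). A minimal sketch, class name hypothetical:

```java
import java.math.BigDecimal;

public class GrowthDemo { // demo class name is hypothetical
    public static void main(String[] args) {
        BigDecimal bd = new BigDecimal("1.1"); // scale 1
        for (int i = 0; i < 5; i++) {
            bd = bd.multiply(bd); // scale doubles on each squaring
            System.out.println("scale=" + bd.scale() + "  " + bd);
        }
        // scales printed: 2, 4, 8, 16, 32
    }
}
```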

Why isn't it possible to customize double so that it works like BigDecimal?

You get what you pay for. BigDecimal can be more powerful, but its sophistication makes computation slower, it takes more memory, and it is more complicated to use. There is no numeric data type that satisfies all use cases, so you have to pick which one you want depending on the needs of the application.

Boann
  • 48,794
  • 16
  • 117
  • 146

A double uses 8 bytes to represent its value, and its precision is limited to about 15 decimal digits; see http://en.wikipedia.org/wiki/IEEE_754-1985. BigDecimal's precision is de facto unlimited, since it is based on an int array of arbitrary length. Though operations with double are much faster than with BigDecimal, double should never be used for precise values such as currency.

Evgeniy Dorofeev

When operations are performed on BigDecimal, the number of digits in the result will frequently be larger than either operand. This has two major effects:

  1. Unless code forces periodic rounding, operations on BigDecimal will get slower and slower as the numbers get longer and longer (see the sketch after this list).

  2. No fixed-size container can possibly be big enough to accommodate a BigDecimal, since many operations between two values which filled up their respective containers would yield a result too long to fit into a container of that size.
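The standard way to force that periodic rounding is a MathContext. Without one, an operation whose exact result would be infinitely long simply fails. A minimal sketch (class name hypothetical):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class RoundingDemo { // demo class name is hypothetical
    public static void main(String[] args) {
        BigDecimal one = BigDecimal.ONE;
        BigDecimal three = new BigDecimal("3");

        // one.divide(three) would throw ArithmeticException, because
        // 1/3 has a non-terminating decimal expansion.

        // Supplying a MathContext bounds the precision (and the size):
        System.out.println(one.divide(three, MathContext.DECIMAL64));
        // 0.3333333333333333
    }
}
```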

The fundamental reason that float and double can be fast, while BigDecimal cannot, is that they are defined as lopping off as much precision as is necessary in any calculation so as to yield a result which will fit in the same size of container as the original operands. This enables them to use fixed-size containers, and not have to worry about succeeding operations becoming progressively slower.

Incidentally, another major (though less fundamental) reason that BigDecimal is slow is that values are represented using a binary-formatted mantissa but a decimal exponent. Consequently, any operations which would require adjusting the precision of their operands must be preceded by a very expensive "normalization" step. The type might be easier to work with if any given value had exactly one representation, and thus adding 123.456 to 0.044 yielded 123.5 rather than 123.500, but normalizing 123.500 to 123.5 would require much more computation than adding 123.456 and 0.044; further, if that result were added to another number with three significant figures after the decimal point, the normalization performed after the earlier addition would increase the time required to perform the next one.
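The 123.500-versus-123.5 behaviour described above is easy to observe; BigDecimal leaves normalization to the caller via stripTrailingZeros() (class name hypothetical):

```java
import java.math.BigDecimal;

public class NormalizeDemo { // demo class name is hypothetical
    public static void main(String[] args) {
        BigDecimal sum = new BigDecimal("123.456").add(new BigDecimal("0.044"));
        System.out.println(sum);                      // 123.500 (scale preserved)
        System.out.println(sum.stripTrailingZeros()); // 123.5
    }
}
```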

supercat

By definition, double has a fixed precision (53 significand bits in base 2). Its main advantage is that it is very fast, since in practice the basic operations are entirely implemented in hardware. If such a precision and/or base 2 are not suited to your application, you can use arbitrary-precision arithmetic. BigDecimal is standard and uses base 10. However, there are now GNU MPFR Java bindings (which I haven't tried), so you can do your computations with the GNU MPFR library, which uses base 2 and should be significantly faster than BigDecimal.

vinc17

Calculating with double is much faster than with BigDecimal because it's a primitive type, and you also have the full range of mathematical operations at your fingertips.

BigDecimal is great if you need to perform very precise calculations, but you pay a price for that, such as not having easy access to a square root function that actually works on a BigDecimal (as illustrated below) and generally much slower calculations.
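For instance, Math.sqrt works only on double; before Java 9 added BigDecimal.sqrt(MathContext), a common workaround was to round-trip through double, which throws away any precision beyond what a double can hold. A sketch of that workaround (class name hypothetical):

```java
import java.math.BigDecimal;

public class SqrtDemo { // demo class name is hypothetical
    public static void main(String[] args) {
        // double: a hardware-backed square root is one call away.
        System.out.println(Math.sqrt(2.0)); // 1.4142135623730951

        // BigDecimal: approximate via double, losing any extra precision.
        BigDecimal two = new BigDecimal("2");
        BigDecimal approx = BigDecimal.valueOf(Math.sqrt(two.doubleValue()));
        System.out.println(approx); // 1.4142135623730951
    }
}
```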

wobblycogs

A problem I faced with double is that its precision is limited, so when you need more precision than it can provide, the value will be rounded off. With BigDecimal you will not face this problem.

Priya Prajapati