I have to calculate a fairly complex formula in my code, and I'm wondering how to decide whether I should use BigDecimal or just double to do it. The function is:
f(x) = 1.03^(4 - ((1/3) * (x-9)^2))
where x is a double rounded to 4 decimal places, e.g. 1.2345.
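For reference, here is a minimal sketch of the double-only version I have in mind, using java.lang.Math.pow; the class and method names are just illustrative:

```java
public class Formula {
    // f(x) = 1.03^(4 - (1/3) * (x - 9)^2), computed entirely in double
    static double f(double x) {
        double exponent = 4.0 - (x - 9.0) * (x - 9.0) / 3.0;
        return Math.pow(1.03, exponent);
    }

    public static void main(String[] args) {
        // x is rounded to 4 decimal places, e.g. 1.2345
        System.out.println(f(1.2345)); // prints roughly 0.6213
    }
}
```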
What are the pros and cons of using BigDecimal versus double here, and how much precision might I lose if I use double to represent everything?