
I know that this question has already been discussed several times but I am not entirely satisfied with the answer. Please don't respond "Doubles are inaccurate, you can't represent 0.1! You have to use BigDecimal"...

Basically I am working on financial software, and we need to store a lot of prices in memory. BigDecimal was too big to fit in the cache, so we decided to switch to double. So far we are not experiencing any bugs, for the good reason that we only need an accuracy of 12 digits. The 12-digit estimate is based on the fact that even when we talk in millions, we are still able to deal with cents.

A double gives 15 significant decimal digits of precision. If you round your doubles when you have to display/compare them, what can go wrong?

I guess one problem is the accumulation of inaccuracy, but how bad is it? How many operations will it take before it affects the 12th digit?

Do you see any other problems with doubles?

EDIT: About long: that's definitely something we have thought about. We do a lot of division and multiplication, and long doesn't deal well with that (losing the decimals, overflowing), or at least you have to be very, very careful with what you do. My question is more about the theory of doubles: basically, how bad is it, and is the inaccuracy acceptable?

EDIT2: Don't try to fix my software, I am fine with the inaccuracy :). Let me re-word the question: how likely is an inaccuracy to appear if you only need 12 digits and you round doubles when displaying/comparing?
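To make it concrete, by "rounding when displaying/comparing" I mean something like the helper below (the names are mine, purely for illustration):

```java
class RoundedCompare {
    // Round to 'places' decimal digits, half-up
    static double round(double value, int places) {
        double factor = Math.pow(10, places);
        return Math.round(value * factor) / factor;
    }

    // Compare two amounts at cent precision instead of bit-for-bit
    static boolean sameAmount(double a, double b) {
        return round(a, 2) == round(b, 2);
    }

    public static void main(String[] args) {
        double sum = 0.0;
        for (int i = 0; i < 10; i++) sum += 0.1; // accumulates to 0.9999999999999999
        System.out.println(sum == 1.0);           // false: raw comparison fails
        System.out.println(sameAmount(sum, 1.0)); // true: rounded comparison is fine
    }
}
```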

tibo
    cannot you just store the whole amount of cents? – Vlad Nov 14 '13 at 13:04
  • Just because you say "do not answer with x", that doesn't mean that x is a bad answer, as x might be, and in this case IS, the correct answer to your question – LionC Nov 14 '13 at 13:09
  • "`BigDecimal` was too big to fit in the cache" -- what :| – BartoszKP Nov 14 '13 at 13:12
  • @BartoszKP A BigDecimal is 32 bytes, 4 times that of a double. Perhaps they have a very large set of data – Ron Nov 14 '13 at 13:13
  • @RonE Yes, but how come this is even an issue? If you need to count your memory in single bytes then perhaps don't use Java at all, and stick to ASM or C ;0 Otherwise there is something wrong with memory complexity of your approach, not with using a `BigDecimal` type. – BartoszKP Nov 14 '13 at 13:15
  • @BartoszKP when you have to store 10 million prices on the heap then you are talking in GB – tibo Nov 14 '13 at 13:17
  • @tibo That's what I've said - "there is something wrong with memory complexity of your approach". Even if you save these 24 bytes, it will still explode when you'll have 40 million prices not 10... – BartoszKP Nov 14 '13 at 13:18
  • did you consider using integers? They are perfect for representing fixed-point numbers – user902383 Nov 14 '13 at 13:18
  • @tibo 10,000,000 * 32 bytes = 320 Megabytes, so that's not really a reason not to use BigDecimal. – Kayaman Nov 14 '13 at 13:20
  • I have to side with BartoszKP: if your application runs out of memory because you have too much stuff loaded, it is a problem of your approach and not of the data type used. Even if you cut the usage to a quarter, your application still takes way too much RAM and is not scalable at all. At this scale of data you have to consider using slower memory than RAM to store your data – LionC Nov 14 '13 at 13:23
  • @LionC Yep, OP can fight all he wants. You can't use floating point for money. – Ron Nov 14 '13 at 13:30
  • @LionC that has nothing to do with opinions! I want facts and mathematics! – tibo Nov 14 '13 at 13:30
  • It is just that no one answers my core question... I have upvoted some good points and I downvote the too-simple answers. My question is about the theory behind double and how probable an inaccuracy is. – tibo Nov 14 '13 at 13:33
  • @tibo and a lot of people answered with the IEEE 754 reference which describes the complete mathematical problem behind it, for example Ron E did and he was downvoted – LionC Nov 14 '13 at 13:35
  • @tibo I show exactly how you can go wrong. Adding 10 cents 10^15 times leaves you short 1 cent. – Ron Nov 14 '13 at 13:39
  • @RonE I really appreciate your answer. I have commented and that's exactly where I wanted the discussion to go. – tibo Nov 14 '13 at 13:46
  • Sorry, my extrapolation was wrong; it seems that the problem compounds and you lose a whole cent after only 100,000,000 additions (10^8) – Ron Nov 14 '13 at 13:58
  • With regards to doubles being "bad", see [my answer](http://stackoverflow.com/a/19907216/2187042); basically doubles are no worse than our own decimal system (try to represent 1/3 in decimal). The two systems simply have different numbers that they "like". It is within this context that you should consider doubles. The financial system just has a particular love of 1/100 which happens to be exactly representable in decimal but not binary – Richard Tingle Nov 14 '13 at 14:00
  • Good point @RichardTingle. The benefit of BigDecimal is that you control the accuracy with the MathContext. It is still not exact but you control it. – tibo Nov 14 '13 at 14:12
  • This is somewhat of a hack, but if you *must must must* use doubles then rounding to 2 significant figures after every operation would suppress the build-up of error – Richard Tingle Nov 14 '13 at 14:23
  • The question “How many operations will it take before it affect the 12th digit?” cannot be answered without additional information, notably which operations are to be performed and with what values, particularly since the question mentions division and multiplication, not just adding and subtracting amounts of money. – Eric Postpischil Nov 14 '13 at 15:03
  • @EricPostpischil that's right. I was more looking for a worst/best/average scenario – tibo Nov 15 '13 at 02:58
  • possible duplicate of [Which data type to use for manipulating currency](http://stackoverflow.com/questions/15865012/which-data-type-to-use-for-manipulating-currency) – Raedwald Nov 15 '13 at 07:25
  • @tibo: The worst case is that three operations can produce arbitrarily large error, depending on some assumptions. The best is there can be no error after arbitrarily many operations. The average cannot be known without more information. To use **any** numerical arithmetic system, you must understand it and fit your software design to it. – Eric Postpischil Nov 15 '13 at 12:23
  • What I hear is: "How can I represent a discrete amount (money) using a continuous number type (doubles) that's actually based on a different discrete representation (IEEE754)?" And my answer is just no. How likely is an inaccuracy you ask? It's like asking a car manufacturer how many times I can crash their car before it stops working (they don't care because you shouldn't do it, and once may be enough). – Ron Feb 27 '15 at 00:31

8 Answers


If you absolutely can't use BigDecimal and would prefer not to use doubles, use longs to do fixed-point arithmetic (so each long value would represent the number of cents, for example). This will let you represent 18 significant digits.
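A minimal sketch of that fixed-point idea (class and method names are mine, just to illustrate; note that division forces an explicit rounding decision):

```java
class FixedPointCents {
    // $19.99 is stored as 1999 cents; addExact/multiplyExact throw on overflow
    static long add(long aCents, long bCents) { return Math.addExact(aCents, bCents); }

    static long times(long cents, long quantity) { return Math.multiplyExact(cents, quantity); }

    // Division is where fixed point gets tricky: here we round half-up
    static long divide(long cents, long divisor) {
        return Math.round((double) cents / divisor);
    }

    static String format(long cents) {
        return String.format("%d.%02d", cents / 100, Math.abs(cents % 100));
    }

    public static void main(String[] args) {
        System.out.println(format(times(1999, 3)));   // 59.97
        System.out.println(format(divide(10000, 3))); // 33.33 -- the lost cent must be handled explicitly
    }
}
```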

I'd say use joda-money, but this uses BigDecimal under the covers.


Edit (as the above doesn't really answer the question):

Disclaimer: Please, if accuracy matters to you at all, don't use double to represent money. But it seems the poster doesn't need exact accuracy (this seems to be about a financial pricing model which probably has more than 10**-12 built-in uncertainty), and cares more about performance. Assuming this is the case, using a double is excusable.

In general, a double cannot exactly represent a decimal fraction. So, how inexact is a double? There's no short answer for this.

A double may be able to represent a number well enough that you can read the number into a double, then write it back out again, preserving fifteen decimal digits of precision. But as it's a binary rather than a decimal fraction, it can't be exact - it's the value we wish to represent, plus or minus some error. When many arithmetic operations are performed involving inexact doubles, the amount of this error can build up over time, such that the end product has fewer than fifteen decimal digits of accuracy. How many fewer? That depends.

Consider the following function that takes the nth root of 1000, then multiplies it by itself n times:

private static double errorDemo(int n) {
    double r = Math.pow(1000.0, 1.0/n);
    double result = 1.0;
    for (int i = 0; i < n; i++) {
        result *= r;
    }
    return 1000.0 - result;
}

Results are as follows:

errorDemo(     10) = -7.958078640513122E-13
errorDemo(     31) = 9.094947017729282E-13
errorDemo(    100) = 3.410605131648481E-13
errorDemo(    310) = -1.4210854715202004E-11
errorDemo(   1000) = -1.6370904631912708E-11
errorDemo(   3100) = 1.1107204045401886E-10
errorDemo(  10000) = -1.2255441106390208E-10
errorDemo(  31000) = 1.3799308362649754E-9
errorDemo( 100000) = 4.00075350626139E-9
errorDemo( 310000) = -3.100740286754444E-8
errorDemo(1000000) = -9.706695891509298E-9

Note that the size of the accumulated inaccuracy doesn't increase exactly in proportion to the number of intermediate steps (indeed, it's not monotonically increasing). Given a known series of intermediate operations we can determine the probability distribution of the inaccuracy; while this will have a wider range the more operations there are, the exact amount will depend on the numbers fed into the calculation. The uncertainty is itself uncertain!

Depending on what kind of calculation you're performing, you may be able to control this error by rounding to whole units/whole cents after intermediate steps. (Consider the case of a bank account holding $100 at 6% annual interest compounded monthly, so 0.5% interest per month. After the third month of interest is credited, do you want the balance to be $101.50 or $101.51?) Having your double stand for the number of fractional units (i.e. cents) rather than the number of whole units would make this easier - but if you're doing that, you may as well just use longs as I suggested above.
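A rough sketch of that compounding example (0.5% per month on $100.00, stored as cents; the names and the half-up rounding choice are mine):

```java
class InterestRounding {
    // Credit n months of interest, rounding the balance to whole cents each month
    static long perMonthRoundingCents(long startCents, double monthlyRate, int months) {
        long cents = startCents;
        for (int i = 0; i < months; i++) {
            cents = Math.round(cents * (1 + monthlyRate));
        }
        return cents;
    }

    // Same calculation, rounding to whole cents only once at the end
    static long endRoundingCents(long startCents, double monthlyRate, int months) {
        double balance = startCents;
        for (int i = 0; i < months; i++) {
            balance *= (1 + monthlyRate);
        }
        return Math.round(balance);
    }

    public static void main(String[] args) {
        System.out.println(perMonthRoundingCents(10000, 0.005, 3)); // 10150 -> $101.50
        System.out.println(endRoundingCents(10000, 0.005, 3));      // 10151 -> $101.51
    }
}
```

Neither answer is "wrong"; which one you want is a business decision, and doubles force you to make it explicitly.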

Disclaimer, again: The accumulation of floating-point error makes the use of doubles for amounts of money potentially quite messy. Speaking as a Java dev who's had the evils of using double for a decimal representation of anything drummed into him for years, I'd use decimal rather than floating-point arithmetic for any important calculations involving money.

pobrelkey
  • See my edit, that's a very good point but that's not what I am looking for ;) – tibo Nov 14 '13 at 13:09
  • considering percental interest rates this is still a problem (1 * 1.05 is 1.05, which is 1.05 cents and nothing less) – LionC Nov 14 '13 at 13:13
  • Expanded to try to better answer the original question. – pobrelkey Nov 14 '13 at 15:03
  • Really like your code example, a good start for estimating roughly how bad the accumulation of inaccuracy is. NB, you can do the same kind of reasoning with BigDecimal (and divide them by 3 or 7 or 9 or...). About long, I think it is the worst solution: divide it by 3 and your result is truncated; multiply 2 longs together and you can end up with a long overflow... – tibo Nov 15 '13 at 02:55

Martin Fowler wrote something on that topic. He suggests a Money class with internal long representation, and a decimal factor. http://martinfowler.com/eaaCatalog/money.html
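A bare-bones sketch of that pattern (not Fowler's exact interface; the names here are mine). The currency supplies the decimal factor, which also gives you the sanity checks mentioned in the comments below:

```java
import java.util.Currency;

class Money {
    private final long amount;        // in the currency's smallest unit, e.g. cents
    private final Currency currency;  // supplies the decimal factor

    Money(long amount, Currency currency) {
        this.amount = amount;
        this.currency = currency;
    }

    Money add(Money other) {
        if (!currency.equals(other.currency)) {
            throw new IllegalArgumentException("currency mismatch");
        }
        return new Money(Math.addExact(amount, other.amount), currency);
    }

    @Override
    public String toString() {
        int digits = currency.getDefaultFractionDigits(); // 2 for USD, 0 for JPY
        if (digits == 0) return Long.toString(amount);
        long factor = (long) Math.pow(10, digits);
        return amount / factor + "." + String.format("%0" + digits + "d", Math.abs(amount % factor));
    }

    public static void main(String[] args) {
        Money price = new Money(1999, Currency.getInstance("USD"));
        System.out.println(price.add(new Money(1, Currency.getInstance("USD")))); // 20.00
        System.out.println(new Money(500, Currency.getInstance("JPY")));          // 500 (no "cents")
    }
}
```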

Stroboskop
  • Wrapper Class <==> Object Pointer overhead <==> my cache size is multiplied by at least 2 – tibo Nov 14 '13 at 13:11
  • Yep, but you get more reliable code, validation and other things. – Stroboskop Nov 20 '13 at 17:48
  • On the other hand performance and memory usage are getting worse. Depends on the way you want to use it. For a server processing millions of amounts it's not a good idea - in a GUI its built in validation can help a lot, e.g. preventing users from putting mixed currencies in one bag or using decimals on a Yen... – Stroboskop Nov 20 '13 at 17:54
  • The Money class sounds unnecessary; it is similar to the implementation of BigDecimal: a BigInteger (an unlimited-precision integer, backed by an int[]) and a decimal position value. – Ron Nov 23 '13 at 05:38
  • If you just want to store amounts, it doesn't add anything. But if you are using different currencies, you can implement sanity checks or use the exact number of fraction digits allowed. E.g. Japanese Yen has no "cents". If you don't do it you might end up with half a Yen. – Stroboskop Nov 26 '13 at 13:29

Without using fixed-point (integer) arithmetic you can NOT be sure that your calculations are ALWAYS correct. This is because of the way IEEE 754 floating-point representation works: some decimal numbers cannot be represented as finite-length binary fractions. However, ALL fixed-point numbers can be expressed as finite-length integers; therefore, they can be stored as exact binary values.

Consider the following:

public static void main(String[] args) {
    double d = 0.1;
    for (int i = 0; i < 1000; i++) {
        d += 0.1;
    }
    System.out.println(d);
}

This prints 100.09999999999859. ANY money implementation using doubles WILL fail.

For a more visual explanation, try a decimal-to-binary converter and convert 0.1 to binary. You end up with 0.00011001100110011001100110011001 (0011 repeating); converting it back to decimal you get 0.0999999998603016138.

Therefore 0.1 == 0.0999999998603016138


As a side note, BigDecimal is simply a BigInteger with an int decimal location. BigInteger relies on an underlying int[] to hold its digits, therefore offering fixed-point precision.

public static void main(String[] args) {
    double d = 0;
    BigDecimal b = new BigDecimal(0);
    for (long i = 0; i < 100000000; i++) {
        d += 0.1;
        b = b.add(new BigDecimal("0.1"));
    }
    System.out.println(d);
    System.out.println(b);
}

Output:
9999999.98112945 (A whole cent is lost after 10^8 additions)
10000000.0
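That scaled-integer structure is visible directly through BigDecimal's own accessors, and constructing from the double 0.1 shows the binary approximation it actually captures:

```java
import java.math.BigDecimal;

class BigDecimalAnatomy {
    public static void main(String[] args) {
        BigDecimal tenth = new BigDecimal("0.1");
        System.out.println(tenth.unscaledValue()); // 1 (the BigInteger part)
        System.out.println(tenth.scale());         // 1 (decimal point one place from the right)

        // The double constructor preserves the exact binary value of 0.1:
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625
    }
}
```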

Ron
  • "A double gives a 15 significant decimal digit precision." I didn't say that double are accurate and I know that I have to deal with that – tibo Nov 14 '13 at 13:25
  • My point is that the precision doesn't matter. Even with 150 significant digit precision you will go wrong after 10000 additions. There is no way any floating-point type can be used successfully in this situation. – Ron Nov 14 '13 at 13:26
  • +1 for an excellent demonstration of the problem, and for addressing the question ("how bad is the inaccuracy of double") directly. Why on earth was this answer downvoted? – RichieHindle Nov 14 '13 at 13:30
  • @Richie because OP downvotes all people answering with "Doubles are inaccurate" because he wrote "do not answer with 'Doubles are inaccurate'" even if its still the correct answer – LionC Nov 14 '13 at 13:31
  • @RichieHindle Thank you, I think most people struggle to see exactly _why_ doubles don't work for money. It doesn't just come into play when you get into the millions and billions. – Ron Nov 14 '13 at 13:32
  • Thanks @RonE, this version of your answer is much more what I am looking for (upvote ;) ). After 1000 operations your result is accurate to 14 significant digits. Knowing that I need only 12 digits, that means you need several million (billion?) operations before making the inaccuracy visible. To me, and a lot of people I guess, that's completely fine! – tibo Nov 14 '13 at 13:44
  • @tibo Wait until your software has been in operation for years and pennies start appearing/missing. – Ron Nov 14 '13 at 13:46
  • @RonE I never store that ;) It's only hourly/daily calculations which are stored in a cache. We have another server which computes other stuff, and on that side we use BigDecimal. – tibo Nov 14 '13 at 13:50
  • @tibo Okay I think I understand what you're getting at. Consider BartoszKP's comment though: If increasing your space complexity by less than a factor of 4 exceeds your limit, perhaps you are looking at the wrong type of solution. – Ron Nov 14 '13 at 13:55
  • Does “Without using fixed point arithmetic you can NEVER be sure that your calculations are correct” mean that fixed-point multiplication is “correct”? – Pascal Cuoq Nov 14 '13 at 14:00
  • @PascalCuoq Yes, fixed-point multiplication is absolutely correct to infinite precision. However, fixed-point division may not be. The real issue here is the base conversion of a fractional number from decimal to binary. – Ron Nov 14 '13 at 14:06
  • @RonE about your sidenote: you initialise your BigDecimal with a double, so you have the inaccuracy. Initialize it with the string "0.1" and I am pretty sure that the problem will go away ;) – tibo Nov 14 '13 at 14:08
  • @tibo You're absolutely right, I'll amend my answer. – Ron Nov 14 '13 at 14:11
  • @RonE Good, I guess that thanks to fixed-point, I won't observe any imprecision while computing the compound interest over several years for an annual rate of 7%, then. I am glad to hear that. – Pascal Cuoq Nov 14 '13 at 14:14
  • @PascalCuoq With the default MathContext (Unlimited), definitely, with BigDecimal you'll end up with an exact value. You can try, just don't forget to initialise the BigDecimal with an exact value or a String ;) – tibo Nov 14 '13 at 14:18
  • @tibo I am not saying that `BigDecimal` does not allow it, I am saying that this is not “fixed-point” and that the sentence “Without using fixed point arithmetic you can NEVER be sure that your calculations are correct” is strange, considering that fixed-point multiplication is approximate just like floating-point multiplication is approximate. – Pascal Cuoq Nov 14 '13 at 14:30
  • The first statement, that “Without using fixed point (integer) arithmetic, you can NEVER be sure that your calculations are correct”, is false. IEEE-754 is well specified and is susceptible to mathematical proof. Calculations can be designed that produce correct results, and provably so, for particular circumstances. The fact that .01 is not exactly represented does not mean it is impossible to design calculations that temporarily contain inaccuracies but whose results are designed well enough that exact values can be produced in the end. – Eric Postpischil Nov 14 '13 at 15:00
  • @EricPostpischil What I meant by that statement was that (_without knowledge of the manipulations_) "you can never be sure". I agree we can increase precision through algorithmic means, but I'm not sure the OP was looking into this here. – Ron Nov 15 '13 at 06:28
  • @EricPostpischil: Even simpler than that, if one scales calculations so that all precisely-known values will be whole numbers, round-off errors will not occur for values below 2^52. Nowadays there's often not much advantage to using `double` for such cases, even when they would work, since 64-bit types are widely available. Historically, however, there have been many language implementations in which the largest integer type was 32 bits, and doubles were useful for handling whole numbers bigger than that. – supercat Nov 21 '13 at 22:01
  • @supercat: We need to put a stop to this myth that round-off errors do not occur with scaled values. Neither binary floating-point nor decimal floating-point nor integer arithmetic avoid rounding errors when calculating unit price of a three-for-one deal, when converting currencies, when converting annual interest rates to monthly, or when calculating scientific functions such as sine or logarithm. **There is no numerical arithmetic system without rounding errors.** – Eric Postpischil Nov 21 '13 at 22:08
  • @EricPostpischil: Sure there are. Within the whole numbers, 7 divided by 3 is two remainder one, precisely. The key to avoiding round-off errors is to keep track of everything in precise terms. If someone buys a bunch of items at "3 for $1", keep track of the number of items bought at that price; charge $0.34 for the first, $0.33 each for the next two, $0.34 for the fourth, $0.33 for the next two, etc. The price for a single item isn't $0.33 1/3; it's $0.34, but paying full price entitles the customer to a discount on the next two items. – supercat Nov 21 '13 at 22:20
  • @supercat: “Keep track of the number of items bought at that price” does not calculate unit price. It solves a very limited problem: Charging at a check-out stand. It does not provide a unit price that can be used later in general mathematical formula. The point is that the notion of “scale calculations so that all precisely-known values will be whole numbers” does not work. Even if everything is whole numbers, there are very simple calculations that still cannot be performed accurately. Scaling is not an automatic solution to rounding errors. It solves a very limited set of problems. – Eric Postpischil Nov 21 '13 at 23:34
  • @EricPostpischil: If one wishes to have books which balance perfectly, what is necessary is not to use infinite precision or rational-number types, but rather to define a representable answer as being the "correct" one. Consider, for example, that someone is buying gasoline at one of the old-fashioned mechanical pumps. If the first tens-of-pennies digit reads $0.70 and the units digit reads just just past the 6, the cost of the gas isn't $0.762; it's $0.77, *precisely*. That the customer could have gotten a smidgin more gas for the same price does not make the $0.77 price inaccurate. – supercat Nov 22 '13 at 17:26
  • @EricPostpischil: The clerk isn't going to collect $0.762 from the customer; the clerk is going to collect $0.77. If the store sells exactly 1,000 gallons at $0.999/gallon and ends up collecting $1,000.17, the extra 18 cents isn't a "rounding error"; its essentially a charge imposed on customers who fail to dispense all the gas they can for the amount of money they pay. – supercat Nov 22 '13 at 17:33
  • @supercat: Those statements are only about rounding final results. They have nothing to do with performing intermediate calculations accurately. – Eric Postpischil Nov 22 '13 at 17:34
  • @EricPostpischil: Consider a company which buys components in automatically-dispensable lots of 5,000 and frequently resells hundreds at a time; 1000 of some kinds of part would cost $12.16, but since the equipment can count out 197 units just as easily as 200, parts are priced individually; nonetheless, company policy states that every line item will if necessary have a surcharge imposed to make the cost be a multiple of $0.01. – supercat Nov 22 '13 at 17:43
  • If that is policy, someone who buys eight of item would pay $0.10, someone who buys nine, $0.11, and someone who buys ten would pay $0.13. Someone who buys eight of one thing and ten of another would pay $0.23, while someone who bought nine of each would pay a penny less, even though both items were the same price. Even if it would seem the orders should have totaled the same, if the policy is as stated the different totals would both be precisely correct. What's important for accurate math is to specify exactly how things should be computed. – supercat Nov 22 '13 at 17:46
  • @supercat: I am not interested in the fact that there are other ways to calculate these things. I am concerned about advising people that they can scale their numbers and everything will be fine. It will not. That is the key point: The arithmetic system is not free of error just because input values are scaled to integers. Programmers who use arithmetic systems **must** understand the system(s) they use and must design for it; they cannot scale to integers and ignore the arithmetic. – Eric Postpischil Nov 22 '13 at 17:50
  • @EricPostpischil: Code which doesn't match business rules will be wrong, regardless of whether it uses longs, doubles, some big rational number type, or anything else. No matter what one does, there are bound to be little fractions of pennies which need to be allocated somewhere, and the only way they'll be handled correctly will be if one explicitly writes the code to match the business rules. – supercat Nov 22 '13 at 18:06
  • @supercat: That is a non sequitur. I am not saying code should be written without regard to business rules. Set whatever specifications you want for the results the code should produce; that is fine. The point is that the advice to scale numbers so that arithmetic will “work” is not a useful way to accomplish the goal of making software work. It is bad advice. In general, it does not help you conform to business rules, it does not help you get right results. It only “works” in limited situations where you are doing simple addition, subtraction, and some multiplication, within limits. – Eric Postpischil Nov 22 '13 at 18:22

Historically, it was often reasonable to use floating-point types for precise calculations on whole numbers which could get bigger than 2^32, but not bigger than 2^52 [or, on machines with a proper "long double" type, 2^64]. Dividing a 52-bit number by a 32-bit number to yield a 20-bit quotient would require a rather lengthy, drawn-out process on the 8088, but the 8087 coprocessor could do it comparatively quickly and easily. Using doubles for financial calculations would have been perfectly reasonable, if all values that needed to be precise were always represented by whole numbers.

Nowadays, computers are much more able to handle larger integer values efficiently, and as a consequence it generally makes more sense to use integers to handle quantities which are going to be represented by whole numbers. Floating-point may seem convenient for things like fractional division, but correct code will have to deal with the effects of rounding things to whole numbers no matter what it does. If three people need to pay for something that costs $100.00, one can't achieve penny-accurate accounting by having everyone pay $33.333333333333; the only way to make things balance will be to have the people pay unequal amounts.
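A sketch of that unequal split, keeping everything in whole cents so the parts differ by at most a cent and still sum exactly (the allocation policy here is mine, for illustration):

```java
import java.util.Arrays;

class SplitBill {
    // Split totalCents into n parts that sum exactly to totalCents
    static long[] split(long totalCents, int n) {
        long[] parts = new long[n];
        Arrays.fill(parts, totalCents / n);
        long remainder = totalCents % n;
        for (int i = 0; i < remainder; i++) {
            parts[i]++; // the first 'remainder' payers pay one cent more
        }
        return parts;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(split(10000, 3))); // [3334, 3333, 3333]
    }
}
```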

supercat

If the size of BigDecimal is too large for your cache, then you should convert amounts to long values when they are written to the cache and convert them back to BigDecimal when they are read. This will give you a smaller memory footprint for your cache, and you will still have accurate calculations in your application.
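That round trip could look like this, assuming every amount has at most two decimal places and fits in a long as cents (method names are mine):

```java
import java.math.BigDecimal;

class CentsCodec {
    // BigDecimal -> cents; longValueExact() throws if any precision would be lost
    static long toCents(BigDecimal amount) {
        return amount.movePointRight(2).longValueExact();
    }

    // cents -> BigDecimal with scale 2
    static BigDecimal fromCents(long cents) {
        return BigDecimal.valueOf(cents, 2);
    }

    public static void main(String[] args) {
        long cached = toCents(new BigDecimal("1234.56"));
        System.out.println(cached);            // 123456
        System.out.println(fromCents(cached)); // 1234.56
    }
}
```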

Even if you are able to represent your inputs to calculations correctly with doubles, that doesn't mean that you will always get accurate results. You can still suffer from cancellation and other things.

If you refuse to use BigDecimal for your application logic, then you will end up rewriting lots of functionality that BigDecimal already provides.

SpaceTrucker
  • How do you avoid losing precision when converting BigDecimals to long and vice versa? – LionC Nov 14 '13 at 13:17
  • @LionC you are able to represent amounts from `Long.MIN_VALUE` cents to `Long.MAX_VALUE` cents with a single `long`. If all amounts the application will ever store in the cache are in that interval, then this works. – SpaceTrucker Nov 14 '13 at 13:20
  • But you're not able to store half a cent and so on, which, considering interest rates, tax and so on, is an issue in financial software – LionC Nov 14 '13 at 13:25
  • Actually @SpaceTrucker that's an excellent point and I think I should have done that (a long in the implementation and a getter that returns a BigDecimal). – tibo Nov 14 '13 at 13:28

I am going to answer the question by addressing a different part of the problem. Please accept that I am trying to address the root problem, not the stated question to the letter. Have you looked at all of the options for reducing memory?

  1. For example, how are you caching?
  2. Are you using a Flyweight pattern to reduce storage of duplicate numbers?
  3. Have you considered representing common numbers in a certain way?
    For example, zero is a constant, ZERO.
  4. How about some sort of digit-range compression, or a hierarchy of digits, for example a hash map keyed by the major digits? Store a 32-bit value with a flag or multiplier of some kind.
  5. Hints at a cool different approach: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.65.2643
  6. Is your run-of-the-mill cache doing something less efficient?
  7. Pointers are not free; have you thought about array groups? Depends on your problem.
  8. Are you storing objects in the cache as well? They are not small; you can serialize them to structs etc. as well.

Look at the storage problem and stop looking to avoid a potential math issue. Typically there is a lot of excess in Java before you have to worry about digits, and some of it you can work around with the ideas above.
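Point 2, for instance, can be as simple as interning repeated prices so identical amounts share one instance (a hypothetical sketch; the class name is mine):

```java
import java.math.BigDecimal;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class PricePool {
    // Flyweight pool: identical prices share one BigDecimal instance
    private final Map<String, BigDecimal> pool = new ConcurrentHashMap<>();

    BigDecimal intern(String price) {
        return pool.computeIfAbsent(price, BigDecimal::new);
    }

    public static void main(String[] args) {
        PricePool prices = new PricePool();
        BigDecimal a = prices.intern("19.99");
        BigDecimal b = prices.intern("19.99");
        System.out.println(a == b); // true: one instance, however many times it occurs
    }
}
```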

Ted Johnson
  • Thanks for your answer Ted. Wasn't expecting that, but that's very clever. Actually yes, in our cache we have different kinds of optimizations, including compression and using a pool of shared values. We are storing very few objects, and for example we use Trove to have collections of primitives. To be honest, we are not looking at it anymore since we have now reached what we needed. – tibo Nov 14 '13 at 13:57
  • Thanks for taking it in the spirit in which it was offered. Cheers. – Ted Johnson Nov 15 '13 at 07:24

You cannot trust doubles in financial software. They may work great in simple cases, but due to rounding, inaccuracy in presenting certain values etc. you will run into problems.

You have no choice but to use BigDecimal. Otherwise you're saying "I'm writing financial code which almost works. You'll barely notice any discrepancies." and that's not something that'll make you look trustworthy.

Fixed point works in certain cases, but can you be sure that 1 cent accuracy is enough now and in the future?

Kayaman
  • In my case inaccuracy is acceptable but that's not the point. I am trying to find out the limit of double and at what point in time this inaccuracy becomes visible – tibo Nov 14 '13 at 13:12
  • @tibo The answer is: it depends. You can't rely on double being accurate to a certain point. If you're doing a lot of multiplication and division, eventually you'll come up with wrong numbers. Then you'll have to hope your customers don't notice it. – Kayaman Nov 14 '13 at 13:19
  • I think you missed my point. I know that you have to deal with inaccuracy. That's the same for BigDecimal: e.g. you can't represent 1/3, and you will also come up with wrong numbers, as you said. Everything is about handling inaccuracy. BigDecimal has MathContext, while doubles have a default mechanism. I am really annoyed to see everyone thinking that BigDecimal is the silver bullet without going deeper. BigDecimals are great, but there are different solutions for different problems – tibo Nov 15 '13 at 02:46
  • @tibo So far you haven't presented any different solutions. You've said "I don't want to use BigDecimal, but I want the features it offers". The difference between inaccuracy in BigDecimal and double, is that in BigDecimal **you** decide when to *realize* the inaccuracy (i.e. when the computations are complete), whereas in double you never know if the accuracy has already gone to hell and you're just making it worse. – Kayaman Nov 15 '13 at 06:49
  • Besides, we know nothing of your solution. You could have a completely idiotic implementation in there, and instead of addressing the root cause, you're blaming BigDecimal. If you're making professional financial software, I can't believe you'd be running out of memory because of BigDecimals! – Kayaman Nov 15 '13 at 06:52

I hope you have read Joshua Bloch's Java Puzzlers: Traps, Pitfalls, and Corner Cases. This is what he says in puzzle 2, "Time for a Change":

Binary floating-point is particularly ill-suited to monetary calculations, as it is impossible to represent 0.1— or any other negative power of 10— exactly as a finite-length binary fraction [EJ Item 31].
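The point is easy to demonstrate in two lines:

```java
class TenthDemo {
    public static void main(String[] args) {
        System.out.println(0.1 + 0.2);        // 0.30000000000000004
        System.out.println(0.1 + 0.2 == 0.3); // false
    }
}
```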

Abhijith Nagarajan
  • I like this explanation. It shows how it's not just about how many digits to represent but also that some rational decimal numbers are not expressible as any finite-length binary fraction – Ron Nov 14 '13 at 13:10
  • I know that and I have read this answer a lot of times... I am trying to go beyond that – tibo Nov 14 '13 at 13:26