Things I know from other SO questions:
- I must use either integers or Java's BigDecimal (Java is my case, though the question is broader), never floating-point types.
- I must store at least one more decimal place than I want to show. In my case I want to show 2 decimal places, and the convention is to use decimal(19,4) in the database (Postgres in my case, but again the question is broader).
When I have in memory a bill with 101 items, each costing $1.00004999999999999, the total calculated by adding all the BigDecimals (or all the integers, if I were to choose that representation) is $101.00504999999999899, which rounds to $101.01 for display.
If I store 4 decimal places in the DB, each item is rounded and stored as $1.0000, so when I pull them from the database and add them up I get $101.0000, which rounds to $101.00.
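To make the discrepancy concrete, here is a small sketch of the two totals (the class name and the half-up display rounding are just my illustration of the scenario above):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundingDrift {
    public static void main(String[] args) {
        BigDecimal unitPrice = new BigDecimal("1.00004999999999999");

        // In-memory total: sum the exact values, round only for display.
        BigDecimal exactTotal = BigDecimal.ZERO;
        for (int i = 0; i < 101; i++) {
            exactTotal = exactTotal.add(unitPrice);
        }
        System.out.println(exactTotal);                                   // 101.00504999999999899
        System.out.println(exactTotal.setScale(2, RoundingMode.HALF_UP)); // 101.01

        // Stored total: each item is first reduced to 4 decimal places,
        // simulating what decimal(19,4) keeps, then summed after reading back.
        BigDecimal storedPrice = unitPrice.setScale(4, RoundingMode.HALF_UP); // 1.0000
        BigDecimal storedTotal = BigDecimal.ZERO;
        for (int i = 0; i < 101; i++) {
            storedTotal = storedTotal.add(storedPrice);
        }
        System.out.println(storedTotal);                                   // 101.0000
        System.out.println(storedTotal.setScale(2, RoundingMode.HALF_UP)); // 101.00
    }
}
```

The user sees $101.01 while entering the bill, but $101.00 once the data has made a round trip through the database.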
If I increase the storage precision to decimal(19,5), the per-item rounding error is ten times smaller, so I need roughly 1001 items instead of 101 for the same discrepancy to show up, but it is still possible. It can always happen; one possible mitigation is to increase the storage precision so much that this error becomes less likely than lightning striking the computer.
So far my solution is to never work with raw BigDecimal values, but to always normalize them with setScale(4, RoundingMode.HALF_UP) as soon as they are created, so every amount in memory has exactly the 4 decimal places the database will store, roughly like the sketch below. Would you suggest a different approach?
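A minimal sketch of what I mean (MoneyAmount is just an illustrative wrapper name, not an existing library class):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

/**
 * Hypothetical wrapper: values are normalized to 4 decimal places the moment
 * they are created, so in-memory arithmetic only ever sees exactly what the
 * decimal(19,4) column will store.
 */
public final class MoneyAmount {
    private static final int STORAGE_SCALE = 4;   // matches decimal(19,4)
    private static final int DISPLAY_SCALE = 2;

    private final BigDecimal value;

    public MoneyAmount(BigDecimal raw) {
        // Round immediately; the un-normalized value never leaves this constructor.
        this.value = raw.setScale(STORAGE_SCALE, RoundingMode.HALF_UP);
    }

    public MoneyAmount add(MoneyAmount other) {
        return new MoneyAmount(this.value.add(other.value));
    }

    public BigDecimal forStorage() {
        return value;
    }

    public BigDecimal forDisplay() {
        return value.setScale(DISPLAY_SCALE, RoundingMode.HALF_UP);
    }
}
```

With this, each $1.00004999999999999 item becomes $1.0000 the moment it is constructed, and the total of 101 of them is $101.0000 / $101.00 both in memory and after the database round trip.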
Things that I know won't work:
- Calculating the total only after storage. I need to calculate it in JavaScript while the user is inputting the data.
- Never doing calculations with rounded values. I don't know of a way to store the values without rounding them.
- Never rounding up, always rounding down. If I have a tax value of 1.005049999 it must be shown as 1.01; that's a business rule. I could challenge it, but I'd rather find a solution that doesn't change the way the business works.
Other considerations:
- I don't care much about execution speed or memory usage.