
I know that the consensus is "money = decimal". I understand all the technical fine points of the argument, and I agree with them.

Where I still have an issue is that it seems to assume that all finance applications are simple ledgers...

When you start to play with ratios, interpolation (cubic splines, for example), or move to stochastic models, differentiation, integrals... it seems that decimal becomes meaningless.

So my question is: where is the border (mathematical complexity?) at which using decimal becomes more of a burden than an advantage?

EDIT: most complex financial products use statistical models in order to be priced. In the end the price is a $$ amount, and as Flydog57 points out, most (if not all) mathematical libraries use double as input/output.

So unless I redevelop all these mathematical functions (which have been developed by people way smarter than me), I need to constantly cast decimal to double and back. That's where the problem is.
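To make this concrete, here is a simplified sketch of the round trip I mean (not real production code; `CallPrice` and `NormCdf` are made-up stand-ins for model/library math that only accepts `double`):

```csharp
using System;

static class PricingSketch
{
    // Stand-in for the "statistical model": a Black-Scholes style call price.
    // Everything in here is double, because System.Math (Log, Sqrt, Exp) only takes double.
    static double CallPrice(double spot, double strike, double rate, double vol, double years)
    {
        double d1 = (Math.Log(spot / strike) + (rate + vol * vol / 2.0) * years)
                    / (vol * Math.Sqrt(years));
        double d2 = d1 - vol * Math.Sqrt(years);
        return spot * NormCdf(d1) - strike * Math.Exp(-rate * years) * NormCdf(d2);
    }

    // Standard normal CDF via a classic polynomial approximation (Abramowitz & Stegun 26.2.17).
    static double NormCdf(double x)
    {
        double t = 1.0 / (1.0 + 0.2316419 * Math.Abs(x));
        double poly = t * (0.319381530 + t * (-0.356563782 + t * (1.781477937
                     + t * (-1.821255978 + t * 1.330274429))));
        double cdf = 1.0 - Math.Exp(-x * x / 2.0) / Math.Sqrt(2.0 * Math.PI) * poly;
        return x >= 0 ? cdf : 1.0 - cdf;
    }

    static void Main()
    {
        decimal spot = 100.25m;     // market data arrives as decimal "because it's money"
        decimal strike = 105.00m;

        // cast to double on the way into the model...
        double raw = CallPrice((double)spot, (double)strike, rate: 0.02, vol: 0.25, years: 1.0);

        // ...and back to decimal, rounded to cents, because the output is money again
        decimal price = Math.Round((decimal)raw, 2);
        Console.WriteLine(price);
    }
}
```

Every crossing between the decimal world and the double world is a cast like the two in `Main`, and with a real model that happens for every input and every output.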

Is it useful to use decimal "because it's money" if in the end I constantly need to cast it to double to be able to create a price?

That is where the argument saying "money = decimal" may be a simplistic view of a real-world problem. I know the question seems to be a duplicate, but the other answers out there don't deal with this specific case where a price (money) is actually created from a statistical model.

Guillaume
  • Does this answer your question? [decimal vs double! - Which one should I use and when?](https://stackoverflow.com/questions/1165761/decimal-vs-double-which-one-should-i-use-and-when) – Rahatur Sep 15 '21 at 16:47
  • `it seems that decimal becomes meaningless.` On the contrary, it remains just as important. `Decimal` is "just" a far larger double. And `double` doesn't exist in math. Finance and accounting use *simple* math. In finance and accounting there are specific rules that govern precision in all calculations and how fractional digits and remainders should be allocated to reduce bottom-line errors. It's statistics and engineering that actually use stochastic models. `double` vs `decimal` is purely a computing artifact, with a `double` fitting inside a CPU register – Panagiotis Kanavos Sep 15 '21 at 16:49
  • If anything, since `double` results in greater errors than `decimal`, you should prefer `decimal` for more complex algorithms, especially where the error could cause instability. `double` is used for speed, when you can afford (or model) the error. – Panagiotis Kanavos Sep 15 '21 at 16:51
  • On the other hand, if you are doing financial modeling and the model includes serious math, many of those math functions exist only as double. I helped someone port a "basket of mortgages" Monte Carlo simulation from VB6 to .NET back in the day. I measured the speedup at each step. Moving from decimal to double made it noticeably faster. Since decimal-level precision was lost during the calculations, `double` made much more sense – Flydog57 Sep 15 '21 at 17:05
  • DECIMAL! You don't want floating point errors when dealing with $$ – Ctznkane525 Sep 15 '21 at 17:29
  • @ctznkane525: there's a big difference between accounting (or what the OP calls _Simple Ledger_) and what he's asking about. If your modeling involves an exponent (compounding interest is intrinsically an exponential operation), then you are going to use a double, and all that extra precision will fall away, never to be found again. Stochastic models are more complex. Using decimal in these cases does nothing but slow things down – Flydog57 Sep 15 '21 at 18:20
  • Just always use `decimal` when it's the value itself and its decimal representation that actually matter (for financial applications that is the case). It's that simple. If the value instead represents a color, a coordinate in space, the direction/volume of a sound, etc., then use `float` (or `double`, if really needed, though render engines usually just use `float`). Yes, including [interpolation](https://github.com/koszeggy/KGySoft.Drawing/blob/a5041f56384c325a15c800dd0cffab7c8b423237/KGySoft.Drawing/Drawing/Imaging/_Extensions/BitmapDataExtensions.ResizingSession.cs#L431) – György Kőszeg Sep 15 '21 at 19:39
  • Something to watch for if you are storing the data in a database: for millions of rows, the non-significant fractions of a decimal become significant. So in a database column like decimal(15,5) with millions of records you might find a summing error that totals tens of dollars even though individual records show no errors. We encountered this in accounting packages and ended up splitting decimals into two ints for numerator and denominator - this is how many banking mainframes work too. – Jon P Sep 16 '21 at 03:51
  • Never use double for currency, because `round(sum(...)) != sum(round(...))` will trip you up eventually (see the sketch after these comments). Do you need to support other currencies? What about fractional prices? Do you know how much precision you will need up front? Maybe you need two columns, an integer (or long...) and a scale. – Jeremy Lakeman Sep 17 '21 at 05:51
  • To all the people preaching `decimal` even when you *can't* use it because the existing math libraries simply don't deal in it for advanced calculations: keep in mind that Excel is used by financial analysts all over the world who manage to live with the fact that Excel *only uses doubles*. That's right, everyone using a spreadsheet is plainly violating the dogma that you should never, ever use that nasty binary floating point for money. Use `decimal` for base data and storage when the opportunity is there; `double` when you have to, and keep an eye on rounding at the right time. – Jeroen Mostert Sep 17 '21 at 06:43
  • @Jeroen Mostert and Flydog57, both of you have valid answers to the question. You are basically saying to use decimal for money when you can, but there is a whole range of exceptions to this rule. It's a more balanced view than just saying money = decimal – Guillaume Sep 17 '21 at 07:34
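A minimal sketch of the two pitfalls the comments keep circling (my own illustration, not code from any commenter): binary floating point cannot represent most decimal fractions exactly, and `round(sum(...))` versus `sum(round(...))` differ no matter which type you use, so the point at which you round has to be chosen deliberately:

```csharp
using System;
using System.Linq;

class RoundingSketch
{
    static void Main()
    {
        // 1) Representation: 0.1 and 0.2 have no exact binary representation,
        //    so the double comparison fails while the decimal one holds.
        Console.WriteLine(0.1 + 0.2 == 0.3);     // False
        Console.WriteLine(0.1m + 0.2m == 0.3m);  // True

        // 2) Rounding order: rounding each line item vs. rounding the total
        //    gives different results regardless of the numeric type, which is
        //    why "round at the right time" matters as much as the type itself.
        decimal[] lineItems = { 0.333m, 0.333m, 0.333m };   // e.g. a charge split three ways
        decimal roundOfSum = Math.Round(lineItems.Sum(), 2);                 // 1.00
        decimal sumOfRound = lineItems.Select(x => Math.Round(x, 2)).Sum();  // 0.99
        Console.WriteLine(roundOfSum == sumOfRound);                         // False
    }
}
```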

0 Answers