It's pretty clear why double & co. are not a good choice for handling currency. I'm wondering, though: since the issue only arises when calculations are performed on the value, am I correct in assuming that there is no problem at all in just storing a currency value in a double?
For example:

1. The value gets loaded from any given source into a double.
2. The value gets modified, typed in directly by the user.
3. The value gets stored to disk in a suitable format.
In the above example the double is just a way to hold the value in memory, and thus shouldn't present any of the problems that arise when calculations are performed on the value. Is this correct? The sketch below shows what I mean.
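To make the scenario concrete, here is a rough, hypothetical sketch of steps 1 to 3, where no arithmetic ever touches the value (the string sources stand in for whatever the real load/input mechanism would be):

```java
import java.util.Locale;

public class StoreOnly {
    public static void main(String[] args) {
        // 1. "Load" a currency value into a double (here from a string source).
        double price = Double.parseDouble("19.99");

        // 2. The user overwrites it directly; still no arithmetic on the value.
        price = Double.parseDouble("24.95");

        // 3. Store it back to disk in a textual format, rounded to cents.
        String stored = String.format(Locale.US, "%.2f", price);
        System.out.println(stored); // 24.95
    }
}
```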
And, if so, wouldn't it be better to use currency-specific types only when performing calculations? Instead of loading 1000 BigDecimals from a database, one could load 1000 doubles. Then, when a calculation is needed, convert to BigDecimal, do the math, and keep just the resulting double in memory.
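Roughly what I'm imagining (an untested sketch, with the values hard-coded in place of a real database load):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class CalcOnDemand {
    public static void main(String[] args) {
        // Values kept in memory as plain doubles, as loaded from the database.
        double[] prices = { 19.99, 4.25, 0.10 };

        // Only for the duration of the calculation: wrap in BigDecimal and sum.
        BigDecimal total = BigDecimal.ZERO;
        for (double p : prices) {
            // BigDecimal.valueOf uses the double's canonical string form,
            // so 0.10 becomes exactly 0.10 rather than its binary neighbour.
            total = total.add(BigDecimal.valueOf(p));
        }
        total = total.setScale(2, RoundingMode.HALF_UP);

        // Afterwards, keep only the resulting double in memory again.
        double result = total.doubleValue();
        System.out.println(result); // 24.34
    }
}
```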