A sequence of arithmetic operations (+, -, *, /, round) is performed only on monetary values of 1 trillion dollars or less (1e12 USD), each rounded to the nearest penny. What is the minimum number of double-precision floating-point operations, mirroring these operations, that can result in a rounding error of one penny or more?
In practice, how many operations is it safe to perform on double-precision numbers before rounding the result?
This question is related to Why not use Double or Float to represent currency? but seeks a specific example of a problem with using double-precision floating point that is not currently found in any of the answers to that question.
Of course, double values MUST be rounded before comparisons such as ==, <, >, <=, and >=, and they MUST be rounded for display. But this question asks how long you can keep double-precision values unrounded without risking a rounding error, given realistic constraints on the sorts of calculations being performed.
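To illustrate why rounding before comparison is mandatory, here is the classic case (Python floats are IEEE 754 doubles):

```python
a = 0.10 + 0.20   # the sum picks up a tiny binary representation error
b = 0.30

print(a == b)                       # False: a is actually 0.30000000000000004
print(round(a, 2) == round(b, 2))   # True once both sides are rounded to cents
```

The raw comparison fails even though both sides are "thirty cents" to any human reader; rounding to the penny first restores the expected result.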
This question is similar to the question Add a bunch of floating-point numbers with JavaScript, what is the error bound on the sum?, but is less constrained in that multiplication and division are allowed. Frankly, I may have constrained the question too little, because I'm really hoping for an example of a rounding error that could plausibly occur in ordinary business.
It has become clear in the extended discussion on the first answer that this question is ill-formulated because of the inclusion of "round" in the operations.
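One way to see why "round" is the problematic operation: a single round of a double can already be off by a penny relative to exact decimal arithmetic, before any +, -, *, or / is involved:

```python
# The decimal literal 1.005 has no exact binary representation; the nearest
# double is slightly *below* 1.005, so rounding to cents goes down, not up.
print(f"{1.005:.20f}")   # 1.00499999999999989342
print(round(1.005, 2))   # 1.0  -- exact decimal arithmetic would give 1.01
```

So the count of "safe" operations depends entirely on whether rounding the stored double is required to match rounding the exact decimal value it stands for.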
I feel the ability to occasionally round to the nearest cent is important, but I'm not sure how best to define that operation.
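As a sketch of why the definition matters, here are two plausible (hypothetical) definitions of "round a double to the nearest cent", which disagree on the same input. The helper name `round_cents` and the choice of half-up rounding are my own assumptions, not anything fixed by the question:

```python
from decimal import Decimal, ROUND_HALF_UP

def round_cents_via_repr(x: float) -> float:
    # Definition A (assumed): treat the double as the decimal its shortest
    # repr denotes, then round half-up at the cent.
    return float(Decimal(repr(x)).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))

def round_cents_exact(x: float) -> float:
    # Definition B (assumed): round the *exact* binary value the double holds.
    return float(Decimal(x).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))

print(round_cents_via_repr(1.005))  # 1.01 -- '1.005' rounds up
print(round_cents_exact(1.005))     # 1.0  -- 1.00499... rounds down
```

Whichever definition the question adopts changes which operation sequences count as producing a penny error.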
Similarly, I think rounding to the nearest dollar could be justified, e.g., in a tax environment where such rounding is (for who knows what reason) actually encouraged, though not required, by US tax law.
Yet I find the current first answer dissatisfying, because it feels as if cent rounding followed by banker's rounding would still produce the correct result.
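For concreteness, the two-stage scheme I have in mind can be sketched with exact decimal arithmetic (the helper names and the half-up choice at the cent stage are my own assumptions):

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

CENT = Decimal("0.01")
DOLLAR = Decimal("1")

def round_to_cents(d: Decimal) -> Decimal:
    """Stage 1 (assumed): round an exact decimal amount to the cent, half up."""
    return d.quantize(CENT, rounding=ROUND_HALF_UP)

def bankers_round_to_dollars(d: Decimal) -> Decimal:
    """Stage 2: round to the whole dollar, ties to even (banker's rounding)."""
    return d.quantize(DOLLAR, rounding=ROUND_HALF_EVEN)

print(round_to_cents(Decimal("2.345")))         # 2.35
print(bankers_round_to_dollars(Decimal("2.50")))  # 2 (tie goes to even)
print(bankers_round_to_dollars(Decimal("3.50")))  # 4 (tie goes to even)
```

The open question is whether a double-based computation, fed through these same two stages, can ever land a penny (or a dollar) away from this exact-decimal result.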