After a request for clarification, it emerged that the question is about IEEE 754, independently of any programming language. In this context, obtaining the result 2.4196151872870495e-72 for the division under consideration, in round-to-nearest mode, is purely and simply incorrect. The correct result is 2.41961518728705e-72, according to the definition quoted in the question:
[...] every operation [...] shall be performed as if it first produced an intermediate result correct to infinite precision and with unbounded range, and then rounded that result [...].
What happens in practice is that most programming language implementations, and often the language specifications themselves, do not place much emphasis on strictly respecting IEEE 754 semantics for floating-point operations. Even when the IEEE 754 double-precision format is used to store floating-point values, operations can end up being implemented as:
- if the arguments are not already 80-bit floating-point values with 64-bit significands, conversion from double precision to this format (this loses no precision and would not be a problem in itself);
- computation of an 80-bit result from the 80-bit operands, because this is what comes without extra effort when computing with the 8087 instruction set;
- just after that, or later, conversion (in other words, rounding) of the 80-bit value with its 64-bit significand to a double-precision value with a 53-bit significand.
In some cases the last step does not take place immediately, but at the whim of the compiler. This is particularly annoying because it makes the code non-deterministic: adding separate debugging code that should not affect the computations does change them, by changing the availability of the 80-bit registers and causing some intermediate values to be spilled to memory and rounded to double precision.
Even when each intermediate result is stored to double precision immediately, there remains the issue that the result has been computed, and correctly rounded, for a significand of 64 bits, and is then rounded again to 53 bits. In some cases, the mathematical result is close to the midpoint between two double-precision values, and rounding it to a 64-bit significand drags it to the exact middle. If this result with its 64-bit significand is then rounded to 53 bits, the end result is a different value from the one the direct application of the IEEE 754 rule would have produced. This only happens when the mathematical result is very close to the midpoint between two double-precision numbers, so that the two answers are almost equally accurate, but one of them is what the IEEE 754 standard specifies and the other is not.
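This double rounding can be simulated exactly with rational arithmetic. The Python sketch below (the input value is invented for the demonstration; it is not the division from the question) rounds a number that sits just above the midpoint of two consecutive doubles, once directly to 53 bits and once through a 64-bit intermediate, as the 8087 would:

```python
from fractions import Fraction

def round_sig(x: Fraction, p: int) -> Fraction:
    """Round x > 0 to the nearest value with a p-bit significand,
    ties to even, with an unbounded exponent."""
    e = 0
    while x >= 2**p:
        x /= 2
        e += 1
    while x < 2**(p - 1):
        x *= 2
        e -= 1
    n, frac = int(x), x - int(x)
    if frac > Fraction(1, 2) or (frac == Fraction(1, 2) and n % 2 == 1):
        n += 1
    return n * Fraction(2)**e

# A mathematical result just above the midpoint 1 + 2**-53 between the
# consecutive doubles 1 and 1 + 2**-52:
exact = 1 + Fraction(1, 2**53) + Fraction(1, 2**70)

direct    = round_sig(exact, 53)                 # what IEEE 754 specifies
two_steps = round_sig(round_sig(exact, 64), 53)  # 80-bit x87, then double

print(float(direct))     # 1.0000000000000002  (1 + 2**-52)
print(float(two_steps))  # 1.0
```

The 64-bit rounding lands exactly on the midpoint 1 + 2**-53, and the second rounding then resolves the tie to even, giving 1.0 instead of the correctly rounded 1 + 2**-52.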
The article “The pitfalls of verifying floating-point computations” makes good further reading.
Notes:
As mentioned by Patricia in her answer, the reason IEEE 754 specifies that +, -, *, / and √ should compute as if the mathematical result, sometimes one with infinitely many digits, had been computed and then rounded, is that algorithms exist to obtain this rounded result without computing the entire mathematical result. When no algorithm is known to obtain the “correctly rounded” result cheaply, for instance for the trigonometric functions, the standard does not mandate it.
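The defining property is easy to check after the fact with exact rational arithmetic: the computed result must be at least as close to the exact mathematical result as either of its neighbouring doubles. A small Python sketch of such a check for a division (the operands here are arbitrary):

```python
import math
from fractions import Fraction

# Check that a double-precision division is correctly rounded by comparing
# the computed quotient and its two neighbouring doubles against the
# exact rational quotient.
x, y = 1.0, 3.0
q = x / y
exact = Fraction(x) / Fraction(y)
candidates = (math.nextafter(q, -math.inf), q, math.nextafter(q, math.inf))
best = min(candidates, key=lambda d: abs(Fraction(d) - exact))
assert best == q  # 1.0 / 3.0 was rounded correctly
```

math.nextafter requires Python 3.9 or later; on any IEEE 754 platform the assertion holds, since hardware division of doubles is correctly rounded.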
Since you found a solution on a page that explains how to configure the 387 FPU to round directly at 53 bits of significand, I should point out that double-rounding problems can remain even after this configuration, although they become much rarer. Indeed, while the significand of the FPU can be limited to 53 bits, there is no equivalent way to limit the exponent range. A double-precision operation that produces a subnormal result will tend to be double-rounded when computed on the 387, even in 53-bit-significand mode. This caused me to ask this question about how Java implementations implement multiplication on the 387.
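The subnormal case can also be reproduced with rational arithmetic. In the Python sketch below (the operands are invented for the demonstration), two ordinary doubles have an exact product that falls just below the midpoint of the two smallest subnormals. Rounding it first to a 53-bit significand with an unbounded exponent, as a 387 in 53-bit mode does, lands exactly on that midpoint, and the tie then resolves the wrong way when the value is stored as a double:

```python
from fractions import Fraction

def round_sig(x: Fraction, p: int) -> Fraction:
    """Round x > 0 to the nearest value with a p-bit significand and an
    unbounded exponent (an idealised 387 in p-bit precision mode)."""
    e = 0
    while x >= 2**p:
        x /= 2
        e += 1
    while x < 2**(p - 1):
        x *= 2
        e -= 1
    n, frac = int(x), x - int(x)
    if frac > Fraction(1, 2) or (frac == Fraction(1, 2) and n % 2 == 1):
        n += 1
    return n * Fraction(2)**e

def round_subnormal(x: Fraction) -> Fraction:
    """Round x > 0 to the nearest multiple of 2**-1074 (the subnormal
    grid of binary64), ties to even."""
    scaled = x * 2**1074
    n, frac = int(scaled), scaled - int(scaled)
    if frac > Fraction(1, 2) or (frac == Fraction(1, 2) and n % 2 == 1):
        n += 1
    return Fraction(n, 2**1074)

# Two ordinary doubles whose exact product is just below the midpoint
# 1.5 * 2**-1074 between the two smallest subnormals:
a = (1 + 2**-51) * 2**-52
b = (1 + (2**51 - 3) * 2**-52) * 2**-1022
exact = Fraction(a) * Fraction(b)   # = (3/2 - 3 * 2**-103) * 2**-1074

direct    = round_subnormal(exact)                  # IEEE 754: one rounding
two_steps = round_subnormal(round_sig(exact, 53))   # 387 in 53-bit mode

print(float(direct))     # 5e-324  (2**-1074, the correct result)
print(float(two_steps))  # 1e-323  (2 * 2**-1074, double-rounded)
```

On hardware that multiplies doubles directly (for instance with SSE2), a * b produces the correctly rounded 5e-324, matching the single-rounding computation above.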