This question is for standards gurus.
What does a typical C++ compiler do that Java doesn't, or vice versa, when handling floating-point values?
Now, I know the basics of how floating-point numbers are stored, and I know that the computer cannot exactly represent decimal values that are not finite sums of powers of 2 (0.1, for example, has no finite binary representation).
However, it occurred to me that C++ somehow manages to correct for that. For example, the expression 0.1 + 0.2 prints as 0.3 when compiled as C++ (gcc), but as 0.30000000000000004 in Java.
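For reference, this is roughly the comparison I am making (the file and class names are just placeholders I made up for the sketch):

```cpp
// fp.cpp -- compiled with plain `g++ fp.cpp`, no special flags
#include <iostream>

int main() {
    double d = 0.1 + 0.2;
    std::cout << d << '\n';   // prints 0.3 on my machine
    return 0;
}
```

```java
// Fp.java -- the same arithmetic in Java
public class Fp {
    public static void main(String[] args) {
        double d = 0.1 + 0.2;
        System.out.println(d);   // prints 0.30000000000000004
    }
}
```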
So my question is two-fold:

If the number is actually represented internally as 0.30000000000000004, what does the C++ compiler do to correct for it? Does it simply reduce the precision? Does the correction happen only when the number is evaluated? Is there an overhead? Or is the value somehow stored as exactly 0.3?
What was the rationale behind the design decision that makes Java not correct for it? This makes using floating-point primitives a real pain in Java (yes, I am aware of BigDecimal and the like; a sketch of that workaround is below). Is the Java behaviour faster? Is it more correct?
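To be explicit about the workaround I am trying to avoid, this is the kind of thing I mean (a sketch only, with names of my own choosing):

```java
import java.math.BigDecimal;

public class ExactSum {
    public static void main(String[] args) {
        // Constructing from strings keeps the decimal values exact
        BigDecimal a = new BigDecimal("0.1");
        BigDecimal b = new BigDecimal("0.2");
        System.out.println(a.add(b));   // prints 0.3
    }
}
```

It works, but wrapping every sum in object calls like this is the pain I am referring to.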
If there is a benefit to the Java way of doing things, I would be glad to hear it.
I really would like to hear both sides of this. This is for research on programming language design.