
I understand the theory behind binary floating-point numbers, so I know that operations on double values are not exact. What I don't understand is why, in Java, `(double) 65 / 100` prints as 0.65, which is exactly right in decimal, rather than something like 0.6500000000000004.

double a = 5;
double b = 4.35;
int c = 65;
int d = 100;
System.out.println(a -  b); // 0.6500000000000004
System.out.println((double) c /  d); // 0.65
  • Well ... not all floating point calculations are imprecise. Some give the exactly correct answer. It all depends on whether the numbers can be represented precisely as a 53-bit number multiplied by 2^M, where M is positive or negative. Unpicking exactly where the imprecision arises is tricky ... when you are looking at the decimal representations rather than the actual binary representations. – Stephen C Jan 27 '22 at 08:02
  • Try `10_000 * ((double) c / d)` and the error will become visible. So there is a bit of "rounding" in the text representation, especially with such small numbers. – Joop Eggen Jan 27 '22 at 08:05
  • Another way of putting it is that sometimes the rounding errors will cancel each other out when you convert floating point values from decimal to binary and back to decimal. And sometimes they won't. There is some pretty complicated maths at the root of this. (The simple approach is to assume that error may occur and allow for it.) – Stephen C Jan 27 '22 at 08:11
  • @StephenC Thank you for the support. I knew that not all floating-point numbers are imprecise, so I also checked 0.65 in the binary system. It should be 0.10 1001 1001 ..., which is not precise, so I got confused. – Nick Jan 27 '22 at 08:14
  • @JoopEggen I tried "10_000 * ((double) c / d)", but the result is correct. Sorry, I cannot fully understand your point. – Nick Jan 27 '22 at 08:18
  • Also 0.65 has a tiny approximation error. So with a sufficiently large factor the error will pop up. My point is that the conversion from double to String (which could be precise) will often truncate a bit, which can hide any error. – Joop Eggen Jan 27 '22 at 08:26
  • `(double) 65 / 100` is not really 0.65 - it's only printed that way. Java has some rules about how to print floating point numbers - basically it picks the shortest decimal representation (in terms of number of decimal places) that's closer to the given number than any other `double` (or `float`). This usually gives quite good results for multiplication and division, but it gives poor results in the case where you subtract two numbers that are fairly close together (such as 5 and 4.35). – Dawood ibn Kareem Jan 27 '22 at 08:49
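
To make the comments above concrete, the BigDecimal(double) constructor converts a double's exact binary value to decimal without any rounding, so it shows what is really stored (a small sketch, not from the original thread):

import java.math.BigDecimal;

double a = 5, b = 4.35;
int c = 65, d = 100;
// new BigDecimal(double) preserves the exact binary value of its argument.
System.out.println(new BigDecimal(a - b));
// 0.6500000000000003552713678800500929355621337890625
System.out.println(new BigDecimal((double) c / d));
// 0.65000000000000002220446049250313080847263336181640625

Neither result is exactly 0.65; the second merely prints as "0.65" by default because that short string is already enough to identify its double value uniquely.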

1 Answer


Java has its own way of handling floating-point binary-to-decimal conversions.

A simple program in C (compiled with gcc) gives the result:

printf("1: %.20f\n", 5.0 - 4.35);         // 0.65000000000000035527
printf("2: %.20f\n", 65./100);            // 0.65000000000000002220

while Java gives this result (17 digits would already be enough to see the difference, but I am using 20 to make it clearer):

System.out.printf("%.20f\n", 5.0 - 4.35); // 0.65000000000000040000
System.out.printf("%.20f\n", 65./100);    // 0.65000000000000000000

But when using the %a format specifier, both languages print the same underlying (exact) hexadecimal value for a given expression; for 5.0 - 4.35, for example, it is 0x1.4ccccccccccd00000000p-1.
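
For example, a quick check in Java (the %a conversion is standard java.util.Formatter syntax; without a precision it prints the digits of Double.toHexString, dropping trailing zeros):

System.out.printf("%a%n", 5.0 - 4.35);        // 0x1.4ccccccccccdp-1  (i.e. 0x1.4ccccccccccd0p-1)
System.out.printf("%a%n", (double) 65 / 100); // 0x1.4cccccccccccdp-1

The two expressions really do produce two nearby but distinct double values; the difference only disappears in Java's default decimal formatting.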

So Java is not performing any illegal rounding; it simply has a different set of rules for converting binary to decimal. From the Java specification:

The number of digits in the result for the fractional part of m or a is equal to the precision. If the precision is not specified then the default value is 6. If the precision is less than the number of digits which would appear after the decimal point in the string returned by Float.toString(float) or Double.toString(double) respectively, then the value will be rounded using the round half up algorithm. Otherwise, zeros may be appended to reach the precision. For a canonical representation of the value, use Float.toString(float) or Double.toString(double) as appropriate. (emphasis mine)

And in the toString specification:

How many digits must be printed for the fractional part of m or a? There must be at least one digit to represent the fractional part, and beyond that as many, but only as many, more digits as are needed to uniquely distinguish the argument value from adjacent values of type double. That is, suppose that x is the exact mathematical value represented by the decimal representation produced by this method for a finite nonzero argument d. Then d must be the double value nearest to x; or if two double values are equally close to x, then d must be one of them and the least significant bit of the significand of d must be 0. (emphasis mine)

So Java does perform a different binary-to-decimal conversion than C, but the decimal string it prints remains closer to the true binary value than to any other double, so the spec guarantees that the original binary value can be recovered by converting the decimal string back to binary.
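
A quick way to see that guarantee in action (a small sketch using only the standard library):

double x = 5.0 - 4.35;
double y = (double) 65 / 100;
// Double.toString emits just enough digits to identify the double uniquely ...
System.out.println(x); // 0.6500000000000004
System.out.println(y); // 0.65
// ... so parsing the printed string recovers the original binary value exactly.
System.out.println(Double.parseDouble(Double.toString(x)) == x); // true
System.out.println(Double.parseDouble(Double.toString(y)) == y); // true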

Professor William Kahan warned about some Java floating-point issues in this article:

How Java’s Floating-Point Hurts Everyone Everywhere

But this conversion behaviour seems to be IEEE-compliant.

EDIT: I have incorporated the information provided by @MarkDickinson in the comments: this Java behaviour, albeit different from C's, is documented and IEEE-compliant. This has already been explained here, here, and here.

  • Understood! Thank you so much for the useful information. – Nick Jan 27 '22 at 19:28
  • Java is computing _exactly_ the same values as C here - it's not messing anything up, or performing "illegal" roundings. The difference is purely in the way that the values are printed. Try printing the values using the `%a` format specifiers in Java and C - you should see exactly the same output for the two different languages. – Mark Dickinson Jan 29 '22 at 10:12
  • @MarkDickinson, I concede that Java arithmetic matches C arithmetic, but I'm confused: these numbers are just not round numbers when converted from binary to decimal, so why does Java `printf` round them when C `printf` doesn't? Why would Java's developers modify the binary-to-decimal conversion from the standard - and well understood - C behaviour? Is it to make floating-point look "cleaner"? Does Java guarantee that all values converted from binary to decimal and back (with at least 17 decimal digits) will for sure produce the exact same original binary value? – Arc Jan 29 '22 at 16:21
  • So both Java _and_ C are rounding here - taking your first example, `5.0 - 4.35`, the exact value of the result is `0.6500000000000003552713678800500929355621337890625` (or `0x1.4ccccccccccd0p-1`). AFAIK, the behaviour of C isn't strictly specified here, but it looks as though your platform is choosing to round that exact value to 20 places after the point, and display the result of that rounding, while Java is rounding to 17 significant digits and then padding with zeros to get up to 20 places. Both decimal strings will round back to the correct IEEE 754 binary64 value under round-ties-to-even. – Mark Dickinson Jan 29 '22 at 16:49
  • The Java specification explicitly allows for this behaviour: see the "Float and Double" section on [this page](https://docs.oracle.com/javase/7/docs/api/java/util/Formatter.html#syntax), where it says "Otherwise, zeros may be appended to reach the precision.". – Mark Dickinson Jan 29 '22 at 16:50
  • @MarkDickinson, now I understand. I tested these with latest jdk in recent OpenSUSE running on windows WSL. But the platform does not seem to be the issue, the results are from documented Java behaviour, so I edited the answer to reflect it. Thanks for the advice. – Arc Jan 29 '22 at 20:51
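
To illustrate Mark Dickinson's point in the comments above: both 20-digit strings printed earlier (C's, rounded to 20 places, and Java's, padded with zeros) identify the same binary64 value, so parsing either of them gives back exactly 5.0 - 4.35 (a small check, assuming the outputs shown above):

double x = 5.0 - 4.35;
// C's output: the exact value rounded to 20 decimal places.
System.out.println(Double.parseDouble("0.65000000000000035527") == x); // true
// Java's output: 17 significant digits padded with zeros.
System.out.println(Double.parseDouble("0.65000000000000040000") == x); // true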