When I run the following code:
public static void main(String[] args) {
    float number = 10.0f/6.0f;
    System.out.println(number);
}
My output is:
1.6666666
This sort of makes sense to me: float has about seven decimal digits of accuracy. I'm not sure why the integer portion doesn't count as one of those digits, or why the last digit isn't a 7, but okay. Similarly, when I switch to double:
public static void main(String[] args) {
    double number = 10.0/6.0;
    System.out.println(number);
}
My output is:
1.6666666666666667
This also sort of makes sense. Double has greater accuracy: 15 digits according to my textbook. Here I get 16 digits after the decimal point, and this time the last digit rounds up to a 7. I'm not sure why the digit counts differ; maybe the trailing 7, where the float ended in a 6, comes from the fact that the original calculation happens in binary and is only converted to decimal for printing.
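Out of curiosity, I also printed the exact values stored in both cases. If I understand right, the BigDecimal(double) constructor preserves the stored binary value exactly, with no rounding, so this is just my attempt at a probe:

public static void main(String[] args) {
    float f = 10.0f/6.0f;
    double d = 10.0/6.0;
    // BigDecimal(double) keeps the binary value exactly, so this shows every digit actually stored.
    // The float argument is widened to double when passed to the constructor.
    System.out.println(new java.math.BigDecimal(f));
    System.out.println(new java.math.BigDecimal(d));
}

For the float I get 1.66666662693023681640625; for the double, a much longer expansion that starts with a long run of 6s.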
What I don't understand though is what happens when I do this:
public static void main(String[] args) {
    double number = 10.0f/6.0f;
    System.out.println(number);
}
My output is:
1.6666666269302368
My understanding is that the division produces a float, which is implicitly converted to a double when it is assigned to the variable. But where do all the extra digits come from? I expected zeros. I thought the Java compiler might be treating this situation specially because it sees that the expression is part of the declaration and initialization of a double variable, but the following code has the same output:
public static void main(String[] args) {
    float number1 = 10.0f/6.0f;
    double number2 = number1;
    System.out.println(number2);
}
Again:
1.6666666269302368
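To check whether the conversion itself was changing anything, I printed the exact stored values before and after the assignment (again assuming BigDecimal shows the stored value exactly), alongside the default output for each type:

public static void main(String[] args) {
    float number1 = 10.0f/6.0f;
    double number2 = number1; // widening conversion from float to double

    // Exact stored values, so the two can be compared digit for digit.
    System.out.println(new java.math.BigDecimal(number1));
    System.out.println(new java.math.BigDecimal(number2));

    // Default formatting for each type, for comparison.
    System.out.println(number1); // 1.6666666
    System.out.println(number2); // 1.6666666269302368
}

Both BigDecimal lines print the same thing for me (1.66666662693023681640625), so the conversion doesn't seem to add or drop anything. I still don't see why printing the double shows so many more digits than printing the float when they apparently hold the same value.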