Floating-point numbers are printed differently by different languages because printing is done for different purposes, so different choices are made about how to do it.
Printing a floating-point number is a conversion operation: A value encoded in an internal format is converted to a decimal numeral. However, there are choices about the details of the conversion.
(A) If you are doing precise mathematics and want to see the actual value represented by the internal format, then the conversion must be exact: It must produce a decimal numeral that has exactly the same value as the input. (Each floating-point number represents exactly one number. A floating-point number, as defined in the IEEE 754 standard, does not represent an interval.) At times, this may require producing a very large number of digits.
(B) If you do not need the exact value but do need to convert back and forth between the internal format and decimal, then you need to convert it to a decimal numeral precisely (and accurately) enough to distinguish it from any other result. That is, you must produce enough digits that the result is different from what you would get by converting numbers that are adjacent in the internal format. This may require producing a large number of digits, but not so many as to be unmanageable.
(C) If you only want to give the reader a sense of the number, and do not need to produce the exact value in order for your application to function as desired, then you only need to produce as many digits as are needed for your particular application.
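To make the three choices concrete, here is a minimal Java sketch (Java only because it comes up below; the class name and the value 0.1 are purely illustrative). The results in the comments assume IEEE-754 binary64 doubles, which is what Java's double is.

    import java.math.BigDecimal;

    public class ConversionChoices {
        public static void main(String[] args) {
            double x = 0.1;

            // (A) Exact: the BigDecimal(double) constructor preserves the exact
            // binary value, so this prints every digit of what x really is:
            // 0.1000000000000000055511151231257827021181583404541015625
            System.out.println(new BigDecimal(x));

            // (B) Round-trip: Double.toString produces just enough digits to
            // distinguish the value from adjacent doubles: 0.1
            System.out.println(Double.toString(x));

            // (C) A fixed number of digits chosen for display: 0.10
            System.out.println(String.format("%.2f", x));
        }
    }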
Which of these should a conversion do?
Different languages have different defaults because they were developed for different purposes, or because it was not expedient during development to do all the work necessary to produce exact results, or for various other reasons.
(A) requires careful code, and some languages or implementations of them do not provide, or do not guarantee to provide, this behavior.
(B) is required by Java, I believe. However, as we saw in a recent question, it can have some unexpected behavior. (65.12 is printed as “65.12” because the latter has enough digits to distinguish it from nearby values, but 65.12 - 2 is printed as “63.120000000000005” because there is another floating-point value between it and 63.12, so you need the extra digits to distinguish them.)
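A short Java sketch reproduces that behavior; the class name is illustrative, and the commented output assumes binary64 doubles and Java's default Double.toString conversion.

    public class DistinguishingDigits {
        public static void main(String[] args) {
            // Both results are simply the doubles nearest to the mathematical
            // values; they only look inconsistent because Double.toString
            // prints the fewest digits that still round-trip uniquely.
            System.out.println(65.12);      // prints 65.12
            System.out.println(65.12 - 2);  // prints 63.120000000000005
        }
    }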
(C) is what some languages use by default. It is, in essence, wrong, since no single choice of how many digits to print can be suitable for all applications. Indeed, we have seen over decades that it fosters continuing misconceptions about floating-point, largely by concealing the true values involved. It is, however, easy to implement, and hence is attractive to some implementors. Ideally, a language should by default print the correct value of a floating-point number. If fewer digits are to be displayed, the number of digits should be selected by the application implementor, with consideration of how many digits are appropriate to produce the desired results.
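As a sketch of how a fixed-digit default conceals values (again in Java, with an illustrative class name), compare three ways of printing 0.1 + 0.2; the commented results follow from binary64 arithmetic.

    import java.math.BigDecimal;

    public class ConcealedValues {
        public static void main(String[] args) {
            double sum = 0.1 + 0.2;

            // A fixed six-digit default hides any difference from 0.3:
            // 0.300000
            System.out.printf("%.6f%n", sum);

            // Round-trip output at least reveals that something is off:
            // 0.30000000000000004
            System.out.println(sum);

            // The exact value shows what the program is really working with:
            // 0.3000000000000000444089209850062616169452667236328125
            System.out.println(new BigDecimal(sum));
        }
    }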
Worse, some languages, in addition to not displaying the actual value or enough digits to distinguish it, do not even guarantee that the digits they do produce are correct in any reasonable sense (such as being the value you would get by rounding the exact value to the number of digits shown). When programming in an implementation that does not provide a guarantee about this behavior, you are not doing engineering.
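One way to probe that guarantee is to compare a formatted result against the exact value rounded to the same number of places, as in the Java sketch below. Here isCorrectlyRounded is a hypothetical helper, and the half-even tie-breaking rule is an assumption that may need adjusting to match a platform's documented convention (ties are rare, since they require the exact decimal expansion to end in a 5 at the cut-off position).

    import java.math.BigDecimal;
    import java.math.RoundingMode;
    import java.util.Locale;

    public class RoundingCheck {
        // Hypothetical helper: does formatting x to 'places' decimal places
        // match the exact value of x rounded to that many places?
        static boolean isCorrectlyRounded(double x, int places) {
            String printed = String.format(Locale.ROOT, "%." + places + "f", x);
            String fromExact = new BigDecimal(x)                // exact value of x
                    .setScale(places, RoundingMode.HALF_EVEN)   // assumed tie rule
                    .toPlainString();
            return printed.equals(fromExact);
        }

        public static void main(String[] args) {
            System.out.println(isCorrectlyRounded(0.1 + 0.2, 6));
        }
    }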