The default floating-point formatting behavior in Python (since version 3.1) is to produce just as many digits as needed to uniquely distinguish the number represented. In IEEE 754 double precision, the floating-point format most commonly used, 123456789.123456789 is not exactly representable. When this numeral is encountered in source text, it is converted to the nearest representable value, 123456789.12345679104328155517578125.
Then, when that value is formatted as a string, “123456789.12345679” is printed: fewer digits would fail to distinguish it from nearby representable values, such as 123456789.123456776142120361328125, and more digits are unnecessary, since the trailing “…79” already suffices to distinguish it from its two neighbors, which end in “…77…” and “…80…”.
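This is easy to observe from the standard library alone: Decimal exposes the exact value stored in the double, and math.nextafter (available since Python 3.9) walks to the adjacent representable values.

```python
import math
from decimal import Decimal

x = 123456789.123456789         # the literal is rounded to the nearest double

print(Decimal(x))               # exact stored value:
                                # 123456789.12345679104328155517578125
print(repr(x))                  # shortest round-tripping form: '123456789.12345679'

# The two nearest representable neighbors, one unit in the last place away:
print(Decimal(math.nextafter(x, 0)))         # ends in ...77... (below)
print(Decimal(math.nextafter(x, math.inf)))  # ends in ...80... (above)

# The short string converts back to exactly the same double:
assert float(repr(x)) == x
```

The final assertion is the round-trip property that drives the choice of digit count: the printed string, though short, identifies the stored value unambiguously.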