From the documentation it follows that the most compact form in which the number can be represented will be chosen.
That is, when you do not specify a format string, the default is the "G" format string. The specification of the G format string says:
Result: The most compact of either fixed-point or scientific notation.
When no precision specifier is given, the default is 15 significant digits. That means that a number whose value rounds to a short decimal within those 15 digits (like 0.1 in harriyott's example) will be displayed in fixed-point notation, unless exponential notation is more compact.
When more digits are needed, it will, by default, display all of them (up to 15) and switch to exponential notation once that is shorter.
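As a small, self-contained sketch of that default (the program name and the invariant culture are my own choices, and the behaviour described is the .NET Framework-era one), the parameterless ToString(), "G" and "G15" all give the same 15-digit result for a double:

using System;
using System.Globalization;

class DefaultFormatDemo
{
    static void Main()
    {
        double d = 1.0 / 7.0;
        CultureInfo inv = CultureInfo.InvariantCulture;

        // All three calls produce the same 15-digit string: the parameterless
        // overload uses "G", and "G" without a precision specifier means
        // "G15" for a double.
        Console.WriteLine(d.ToString(inv));        // 0.142857142857143
        Console.WriteLine(d.ToString("G", inv));   // 0.142857142857143
        Console.WriteLine(d.ToString("G15", inv)); // 0.142857142857143
    }
}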
Putting this together:
?(1.0/7.0).ToString()
"0,142857142857143" // 15 digits
?(10000000000.0/7.0).ToString()
"1428571428,57143" // 15 significant digits, E-notation not shorter
?(100000000000000000.0/7.0).ToString()
"1,42857142857143E+16" // 15 sign. digits, above range for non-E-notation (15)
?(0.001/7.0).ToString()
"0,000142857142857143" // non E-notation is shorter
?(0.0001/7.0).ToString()
"1,42857142857143E-05" // E-notation shorter
And, of interest:
?(1.0/2.0).ToString()
"0,5" // exact representation
?(1.0/5.0).ToString()
"0,2" // rounded, zeroes removed
?(1.0/2.0).ToString("G20")
"0,5" // exact representation
?(1.0/5.0).ToString("G20")
"0,20000000000000001" // unrounded
This is to show what happens behind the scenes and why 1.0/5.0 is written as 0,2 rather than 0,20000000000000001, which is the value that is actually stored. By default, 15 significant digits are shown. When there are more digits (and there always are, except for certain special numbers), they are rounded the normal way. After rounding, redundant trailing zeroes are removed.
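A short sketch of that round-then-trim behaviour (again with the invariant culture; the format choices are mine): fixed-point formatting keeps the zeroes that the rounding step produces, while the general format drops them:

using System;
using System.Globalization;

class RoundThenTrimDemo
{
    static void Main()
    {
        double fifth = 1.0 / 5.0; // stored as 0.2000000000000000111...
        CultureInfo inv = CultureInfo.InvariantCulture;

        // Fixed-point formatting keeps the zeroes produced by rounding...
        Console.WriteLine(fifth.ToString("F15", inv)); // 0.200000000000000

        // ...while the general format rounds to 15 significant digits and
        // then strips the redundant trailing zeroes.
        Console.WriteLine(fifth.ToString("G15", inv)); // 0.2
    }
}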
Note that a double has a precision of 15 or 16 significant decimal digits, depending on the value. So, by showing 15 digits, what you see is a correctly rounded number, always a complete representation, and the shortest representation of the double.
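As a quick check of that "complete representation" point for this particular value, both the 15-digit and the 17-digit strings parse back to the very same double (sketch with the invariant culture):

using System;
using System.Globalization;

class RoundTripDemo
{
    static void Main()
    {
        double fifth = 1.0 / 5.0;
        CultureInfo inv = CultureInfo.InvariantCulture;

        string fifteenDigits = fifth.ToString("G15", inv);   // "0.2"
        string seventeenDigits = fifth.ToString("G17", inv); // "0.20000000000000001"

        // Both strings parse back to the exact same double, so for this value
        // the short 15-digit form loses no information.
        Console.WriteLine(double.Parse(fifteenDigits, inv) == fifth);   // True
        Console.WriteLine(double.Parse(seventeenDigits, inv) == fifth); // True
    }
}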