DBL_MAX is:
179769313486231570814527423731704356798070567525844996598917476803157260
780028538760589558632766878171540458953514382464234321326889464182768467
546703537516986049910576551282076245490090389328944075868508455133942304
583236903222948165808559332123348274797826204144723168738177180919299881
250404026184124858368.000000
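For reference, here is a minimal sketch that reproduces this output (assuming an IEEE 754 binary64 double):
#include <float.h>
#include <stdio.h>

int main(void)
{
    /* %f prints the full decimal expansion: 309 integer digits */
    printf("%f\n", DBL_MAX);
    return 0;
}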
But if I do:
double a = 1234567890123456789.0;
printf("%f\n", a);
The output is:
1234567890123456768.000000
Here the precision is 17 significant digits: only the first 17 digits match the value I wrote.
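That matches the usual round-trip figure for binary64: 17 significant decimal digits are enough to identify any double uniquely. A small sketch (DBL_DECIMAL_DIG requires C11; the value is the same as above):
#include <float.h>
#include <stdio.h>

int main(void)
{
    double a = 1234567890123456789.0;
    printf("%.17g\n", a);             /* 1.2345678901234568e+18 */
    printf("%d\n", DBL_DECIMAL_DIG);  /* 17 for IEEE 754 doubles (C11) */
    return 0;
}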
double a = 0.1234567890123456789;
printf("%.20f\n", a);
The output is:
0.1234567890123456773
Here, too, only the first 17 digits after the decimal point are correct.
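The same limit is visible in the bits: the C99 %a conversion prints the significand in hexadecimal, and a binary64 double stores 53 significant bits, which is only about 15-17 decimal digits. A sketch:
#include <float.h>
#include <stdio.h>

int main(void)
{
    double a = 0.1234567890123456789;
    printf("%a\n", a);            /* one implicit bit + 13 hex digits = 53 bits */
    printf("%d\n", DBL_MANT_DIG); /* 53 for binary64 */
    return 0;
}
Trying a value with digits on both sides of the decimal point: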
double a = 1234567890.1234567890123456789;
printf("%.20f\n", a);
That will generate:
1234567890.12345671653747558594
Now the precision is 10 digits before the decimal point plus 7 after it, which again makes 17.
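For comparison, here is DBL_MAX itself in scientific and hexadecimal notation (a sketch; it suggests that only the leading ~17 of the 309 digits are significant, and the remaining ~292 come from scaling by the binary exponent):
#include <float.h>
#include <stdio.h>

int main(void)
{
    printf("%.17e\n", DBL_MAX); /* 1.79769313486231571e+308 */
    printf("%a\n", DBL_MAX);    /* 0x1.fffffffffffffp+1023 */
    return 0;
}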
Does that mean I only get 17 precise digits from a double? If so, why is DBL_MAX over 300 digits long?