I will keep the accepted answer as accepted.
But I will add this answer to address the other part of the question, about the 839.21.
There appears to be a bug in printf in that cygwin gcc implementation. R thought it was a mingw thing, and there is a case of a similar printf bug with mingw which, deeper down, is a bug in the Windows C/C++ runtime files. Chux has found that his cygwin implementation is fine, though his cygwin implementation is more current.
I have found an online C compiler which doesn't have the bug, so it illustrates things well: https://www.tutorialspoint.com/compile_c_online.php
One of the things my question asked was why 7.21 was displaying accurately while 839.21 was displaying inaccurately, given that they have the same fraction after the decimal point.
printf("%.100f\n", 7.21);
printf("%.100f\n", 839.21);
It's worth looking at a printf that doesn't have the bug.
printf("%f\n",7.21);
// 7.210000
printf("%f\n",839.21);
// 839.210000
We can see how they are stored in a double:
printf("%.60f\n",7.21);
7.209999999999999964472863211994990706443786621093750000000000
printf("%.20a\n",7.21);
0x1.cd70a3d70a3d70000000p+2
printf("%.60f\n",839.21);
839.210000000000036379788070917129516601562500000000000000000000
printf("%.20a\n",839.21);
0x1.a39ae147ae1480000000p+9
Notice that the fraction stored in binary, i.e. the number after the binary point, is very different for 7.21 than for 839.21. This is because in scientific notation there is only one digit before the point, so even if these were just decimal numbers in scientific notation, the fraction parts would differ: 7.21 versus 8.3921. If you look at 839 in binary, it is 1101000111. So you can see the 1.a there: the first bit is 1, then the following nibble is 1010, which is a. Different from the case of 7.21.
So that's why one shouldn't expect the same accuracy when storing 7.21 as when storing 839.21.
Chux has mentioned there is an issue in my implementation involving an internal conversion to float in the %a case. So it's worth looking at how they are stored in a float, by declaring float f; and passing it to printf with the format specifiers %.60f and %.20a, still using the mentioned working implementation of printf.
printf("%.60f\n",f); //float f=7.21
7.210000038146972656250000000000000000000000000000000000000000
printf("%.20a\n",f); //float f=7.21
0x1.cd70a400000000000000p+2 <-- 7.21 in float, in hex.
printf("%.60f\n",f); //float f=839.21
839.210021972656250000000000000000000000000000000000000000000000
printf("%.20a\n",f); //float f=839.21
0x1.a39ae200000000000000p+9 <-- 839.21 in float, in hex
Coming back to my question: Chux noticed that there is a bug whereby the number is converted to float in the %a case.
So, for example, in my early (not latest) cygwin implementation, which is what my question asks about and where the bug is, double x=7.21; printf("%a\n", x);
was printing hex, but it was showing the hex of the number as stored in a float: 0x1.cd70a4p+2
There may be more to the bug, because the %f case may be a bit different, but anyhow, it is definitely a printf bug in the early cygwin implementation: gcc.exe (rubenvb-4.5.4) 4.5.4,
Copyright (C) 2010 Free Software Foundation, Inc. (Whether that affects an early mingw one too, I don't know.)
There is a related printf bug with mingw's printf, which is down to the C/C++ runtime files: Long double is printed incorrectly with iostreams on MinGW. But my issue is resolved in the latest cygwin gcc that chux uses. (I will try the latest cygwin to double check that.)