I am trying to generate a random number between -10 and 10 with a step of 0.3 (though I want these values to be arbitrary), and I am running into double-precision floating-point accuracy issues. float.h's DBL_DIG
is supposed to be the minimum precision at which no rounding error occurs [EDIT: this is false; see Eric Postpischil's comment for the true definition of DBL_DIG
], yet when I print to this many digits, I still see rounding error.
#include <stdio.h>
#include <float.h>
#include <stdlib.h>

int main()
{
    for (;;)
    {
        printf("%.*g\n", DBL_DIG, -10 + (rand() % (unsigned long)(20 / 0.3)) * 0.3);
    }
}
When I run this, I get this output:
8.3
-7
1.7
-6.1
-3.1
1.1
-3.4
-8.2
-9.1
-9.7
-7.6
-7.9
1.4
-2.5
-1.3
-8.8
2.6
6.2
3.8
-3.4
9.5
-7.6
-1.9
-0.0999999999999996
-2.2
5
3.2
2.9
-2.5
2.9
9.5
-4.6
6.2
0.799999999999999
-1.3
-7.3
-7.9
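For reference, the same values (and the same misprints) can be reproduced without rand() by walking every step the expression can produce. A minimal sketch; the loop bound mirrors the (unsigned long)(20 / 0.3) cast above:

#include <stdio.h>
#include <float.h>

int main()
{
    /* Print every multiple of 0.3 from -10 up, using DBL_DIG significant digits,
       to see which ones come out with trailing rounding error. */
    for (unsigned long i = 0; i < (unsigned long)(20 / 0.3); i++)
    {
        printf("%lu: %.*g\n", i, DBL_DIG, -10 + i * 0.3);
    }
    return 0;
}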
Of course, a simple solution would be to just #define DBL_DIG 14
, but I feel that is wasting accuracy. Why is this happening, and how do I prevent it? This is not a duplicate of Is floating point math broken? since I am asking about DBL_DIG
and how to find the minimum accuracy at which no error occurs.
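In case it is unclear, this is the kind of workaround I mean, written here as a hard-coded 14 instead of actually redefining the macro (a minimal sketch, not what I want to ship):

#include <stdio.h>
#include <stdlib.h>

int main()
{
    for (;;)
    {
        /* Printing at 14 significant digits hides the rounding error,
           but it throws away a digit of precision compared to DBL_DIG. */
        printf("%.14g\n", -10 + (rand() % (unsigned long)(20 / 0.3)) * 0.3);
    }
}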