Using this simple program:
    #include <stdint.h>
    #include <stdio.h>
    #include <math.h>
    #include <string.h>

    int main(int argc, char* argv[]) {
        double a = 10.333300;
        double b = 10.333300000000000;
        double c = nextafter(a, a + 0.5);
        double d = nextafter(b, a + 0.5);
        printf("[%012.12f]\n[%012.12f]\n[%012.12f]\n[%012.12f]\n", a, b, c, d);
        return 0;
    }
The printed output is as expected:
$ ./nextafter
[10.333300000000]
[10.333300000000]
[10.333300000000]
[10.333300000000]
However, the true values are not precisely these:
(gdb) p a
$1 = 10.333299999999999
(gdb) p b
$2 = 10.333299999999999
(gdb) p c
$3 = 10.333300000000001
(gdb) p d
$4 = 10.333300000000001
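For the record, the same discrepancy is visible without gdb: printing with 17 significant digits (enough to uniquely identify any IEEE-754 double) reveals the stored values. A minimal sketch:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double a = 10.333300;
        /* %.17g prints enough digits to distinguish any two doubles */
        printf("%.17g\n", a);                     /* 10.333299999999999 */
        printf("%.17g\n", nextafter(a, a + 0.5)); /* 10.333300000000001 */
        return 0;
    }

The %012.12f format in the program above rounds to 12 decimal places, which is why the difference was hidden there.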
Now, it seems this is due to the way floats and doubles are represented in binary (I'm on a 64-bit x86 VM running an old RHEL). Using nextafter pushes the problem as far away as possible, but it raises some questions:
1. Couldn't 10.3333 be "padded" with zeroes to the right until it formed an exactly representable binary value (10.333300000000001 in this case, I guess)? I.e., could nextafter be applied by default?
2. Is there a way to set the default precision of a double (to 10 or 11 decimal digits in this case)?
3. Are there architectures that can represent a number like 10.3333 exactly?
4. Or rather, why is 10.3333 rounded down to 10.3332(9) right away instead of being rounded up to the nextafter version?
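Regarding the last question, my understanding is that the literal is rounded to the nearest of the two representable doubles bracketing it, and the lower neighbor happens to be closer here. A small sketch that prints both neighbors and the gap between them (assuming round-to-nearest, the usual default mode):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double x  = 10.3333;                 /* rounds to the nearest double */
        double up = nextafter(x, INFINITY);  /* the neighbor just above */
        printf("stored:  %.17g\n", x);       /* 10.333299999999999 */
        printf("next up: %.17g\n", up);      /* 10.333300000000001 */
        printf("gap:     %.17g\n", up - x);  /* one ULP at this magnitude */
        return 0;
    }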