
Using this simple program:

#include <stdint.h>
#include <stdio.h>
#include <math.h>
#include <string.h>

int main(int argc, char* argv[])  {
    double a = 10.333300;
    double b = 10.333300000000000;
    double c = nextafter(a,a+0.5);
    double d = nextafter(b,a+0.5);
    printf("[%012.12f]\n[%012.12f]\n[%012.12f]\n[%012.12f]\n", a, b, c, d);
    return 0;
}

The printed output is as expected:

$ ./nextafter
[10.333300000000]
[10.333300000000]
[10.333300000000]
[10.333300000000]

However, the true values are not precisely these:

(gdb) p a
$1 = 10.333299999999999
(gdb) p b
$2 = 10.333299999999999
(gdb) p c
$3 = 10.333300000000001
(gdb) p d
$4 = 10.333300000000001

Now, it seems this is due to the way floats and doubles are represented, and I'm using a 64-bit x86 VM with an old RHEL. Using nextafter pushes the problem as far away as possible, but it raises some questions:

Couldn't 0.3333 be "padded" with zeroes to the right until it formed a valid binary number (in this case 0.333300000000001, I guess)? I.e., could nextafter be used by default?

Is there a way to set double default precision (to 10 or 11 in this case)?

Are there architectures that can precisely represent floating point numbers?

Or, rather, why is 10.3333 rounded right away to 10.3332(9) instead of to the nextafter version?

vesperto

  • *Are there architectures that can precisely represent floating point numbers* - represent `PI` precisely, please... – Eugene Sh. Oct 31 '17 at 14:46
  • There are infinite real numbers, but finite floating point representations, so no. – stark Oct 31 '17 at 14:52
  • Note that if you want precisely 4 digits after the decimal, you can scale your numbers by 10000 and use integer arithmetic. – stark Oct 31 '17 at 14:57
  • 10.3332999999999994855670593096874654293060302734375 is slightly closer to 10.3333 than 10.33330000000000126192389870993793010711669921875 is – harold Oct 31 '17 at 14:57
  • Yes, there are architectures that can precisely represent floating point numbers. All of them, in fact. The problem is in expecting that converting from base 10 text to base 2 floating point and then back to base 10 text will keep all the information the original base 10 text had. – Art Oct 31 '17 at 15:01
  • There are chips (IBM Power, amongst them) that have hardware decimal floating point arithmetic. The new IEEE 754 standard defines them. But most commodity chips do not have decimal floating point support. – Jonathan Leffler Oct 31 '17 at 15:02
  • Thanks for the constructive input, @JonathanLeffler – vesperto Oct 31 '17 at 15:07
  • @stark yes, that's been a common workaround (and it's probably safer). – vesperto Oct 31 '17 at 15:16
  • "Couldn't 0.3333 be "padded" with zeroes to the right until it formed a valid binary number" --> No. `0.3333`, `0.33330`, `0.333300`, etc. have the same value, and it is not exactly representable as a binary FP number. – chux - Reinstate Monica Oct 31 '17 at 16:13
  • "Is there a way to set double default precision" No. – chux - Reinstate Monica Oct 31 '17 at 16:14
  • "Are there architectures that can precisely represent floating point numbers?" Common FP already represents FP numbers precisely - in fact very precisely. OP's question appears to be nearly the same, yet different: "Are there architectures that can precisely represent _decimal_ floating point numbers?" – chux - Reinstate Monica Oct 31 '17 at 16:16

1 Answer


The posted format string is %012.12f.

This limits the number of digits too much.

Suggest using a format string like %020.20f; then the output would be more like what you're looking for, i.e.:

[10.33329999999999948557]
[10.33329999999999948557]
[10.33330000000000126192]
[10.33330000000000126192]
user3629249