
I am porting a program from Matlab to C++ for efficiency. It is important for the output of both programs to be exactly the same (**).

I am facing different results for this operation:

```
std::sin(0.497418836818383950)   = 0.477158760259608410   (C++)
sin(0.497418836818383950)        = 0.47715876025960846000 (Matlab)
N[Sin[0.497418836818383950], 20] = 0.477158760259608433   (Mathematica)
```

So, as far as I know, both C++ and Matlab use IEEE 754 double arithmetic. I think I have read somewhere that IEEE 754 allows different results in the last bit. Using Mathematica as the arbiter, it seems that C++ is closer to the true result. **How can I force Matlab to compute the sin with precision to the last bit included, so that the results are the same?**

In my program this behaviour leads to big errors, because the numerical differential equation solver keeps amplifying this error in the last bit. However, I am not sure that the C++ ported version is correct. I am guessing that even if IEEE 754 allows the last bit to differ, it somehow guarantees that this error does not grow when the result is used in further IEEE 754 double operations (because otherwise, two different programs, both correct according to the IEEE 754 standard, could produce completely different outputs). So the other question is: **Am I right about this?**

I would like to get an answer to both bolded questions. Edit: The first question has turned out to be quite controversial, but it is the less important one; can someone comment on the second one?

Note: This is not an error in the printing. Just in case you want to check, this is how I obtained these results:

https://i.stack.imgur.com/XXH7g.png

Note (**): What I mean by this is that the final output, which is the result of some calculations displaying real numbers with 4 decimal places, needs to be exactly the same. The error I talk about in the question keeps growing (because of further operations, each of which differs between Matlab and C++), so the final differences are huge. (If you are curious enough to see how the difference starts getting bigger, here is the full output: [link soon], but this has nothing to do with the question.)

José D.
  • check out this link: http://stackoverflow.com/questions/19808261/why-is-there-significant-double-precision-difference-between-matlab-and-mathemat hope it will help you. ;) – Ankush May 29 '15 at 12:50
  • @Ankush that is about reading from and writing to a file and the problems translating doubles to decimal base. This is a completely different problem (the result itself of the `sin` evaluation is different; it is not an error when converting to decimal base for printing). – José D. May 29 '15 at 12:53
  • IEEE-754 `double`s have a precision of only [15–17 significant decimal digits](http://en.wikipedia.org/wiki/Double-precision_floating-point_format#IEEE_754_double-precision_binary_floating-point_format:_binary64). – Paul R May 29 '15 at 13:01
  • @PaulR I don't claim the opposite, but I am asking why the same operation produces different output in two different technologies, both using the IEEE 754 standard (and how to fix it) – José D. May 29 '15 at 13:02
  • Since they match to within the given precision limits then I'm not sure you can expect anything more than this? – Paul R May 29 '15 at 13:04
  • @PaulR My point is: even if the decimal base output is different, the result itself of the operation should be the same, right? These are not, the value is different, the memory, those 64 bits should be the same, but I have checked that the last bit of those 64 bits is different. That is what I want to fix. – José D. May 29 '15 at 13:07
  • Seems to me that C++'s result is closer to the Mathematica output than Matlab's... (33 - 10 is 23; 60 - 33 is 27) – T.C. May 29 '15 at 13:11
  • @T.C. you are right, corrected. – José D. May 29 '15 at 13:13
  • First, IEEE-754 has *nothing* to say about how `sin` is calculated. Second, you should not be surprised that different languages have small differences for functions like `sin`. Third, if your code is so sensitive to a single-bit difference, perhaps you should look at your code. – horchler May 29 '15 at 13:21
  • The Mathematica and C++ results both have the same bit representation at 64 bit double precision (`0x3FDE89C4E59427B1`), so it's purely a display issue here. MATLAB is off in the last bit though (`0x3FDE89C4E59427B2`). – Paul R May 29 '15 at 13:22
  • @Trollkemada `It is important for the output of both programs to be exactly the same` You will be waiting for Godot. It is not guaranteed to even get C++ programs written with a different compiler to behave exactly the same wrt floating point. – PaulMcKenzie May 29 '15 at 13:22
  • Maybe it's not about the number format, maybe it's about the sine algorithm? – Galik May 29 '15 at 13:23
  • Re *It is important for the output of both programs to be exactly the same.* To the contrary, it is important that you should tolerate some minimal differences. That you expect an exact match and don't get it is a problem with you, not with C++, Matlab, or Mathematica. You need to change your expectations. It's best to never expect exact matches of floating point numbers across platforms, products, or even different compilations of the exact same product. – David Hammen May 29 '15 at 13:26
  • @DavidHammen and others: the sentence 'It is important for the output of both programs to be exactly the same' has been misinterpreted (or miswritten by me); check the edited question, please. – José D. May 29 '15 at 13:33
  • possible duplicate of [matlab and c differ with cos function](http://stackoverflow.com/questions/7622012/matlab-and-c-differ-with-cos-function) – horchler May 29 '15 at 13:52
  • Based on "In my program this behaviour leads to big errors because the numerical differential equation solver keeps increasing this error in the last bit." you have a **much** bigger problem than the last bit of the sine result. Remember the original input data probably came from measurements that were made with a lot less precision than one part in 10^15. – Patricia Shanahan May 29 '15 at 13:54
  • Please do not use images to show code; add small snippets showing how you display the result. – Shafik Yaghmour May 29 '15 at 13:58
  • @horchler: IEEE 754-2008 recommends correct rounding of `sin`. – tmyklebu May 29 '15 at 14:00
  • @PatriciaShanahan Thanks, the second question was oriented to knowing whether there is in fact something wrong with my code, or whether this last-bit error could be the cause of the error. Seems like it is the first option. (This is what I really wanted to solve) – José D. May 29 '15 at 14:00
  • @ShafikYaghmour the code is both in a small snippet and in an image. – José D. May 29 '15 at 14:01
  • You need to consider the possibility that you are trying to solve a chaotic system of equations. Tiny differences rapidly becoming significant are typical of chaos. – Patricia Shanahan May 29 '15 at 14:21
  • @tmyklebu: IEEE-754 recommends all sorts of things that realistically are not going to be widely implemented in the foreseeable future (correctly rounded reductions, alternate exception handling, ...). Recommendations are more of a statement of direction than anything else. – Stephen Canon May 29 '15 at 16:26
  • @StephenCanon: Sure. "Nothing to say" isn't an accurate portrayal of the situation. – tmyklebu May 29 '15 at 17:10

2 Answers


Firstly, if your numerical method depends on the accuracy of sin to the last bit, then you probably need to use an arbitrary precision library, such as MPFR.

The IEEE 754-2008 standard doesn't require that these functions be correctly rounded (it does "recommend" it, though). Some C libms do provide correctly rounded trigonometric functions: I believe that the glibc libm (typically used on most Linux distributions) does, as does CRlibm. Most other modern libms will provide trig functions that are within 1 ulp (i.e. one of the two floating point values on either side of the true value), often termed faithfully rounded, which is much quicker to compute.

None of those values you printed could actually arise as IEEE 64-bit floating point values (even if rounded): the 3 nearest (printed to full precision) are:

0.477158760259608 405451814405751065351068973541259765625

0.477158760259608 46096296563700889237225055694580078125

0.477158760259608 516474116868266719393432140350341796875

The possible values you could want are:

  1. The exact sin of the decimal .497418836818383950, which is

0.477158760259608 433132061388630377105954125778369485736356219...

(this appears to be what Mathematica gives).

  2. The exact sin of the 64-bit float nearest .497418836818383950:

0.477158760259608 430531153841011107415427334794384396325832953...

In both cases, the first of the above list is the nearest (though only barely in the case of 1).

Simon Byrne

The sine of the double constant you wrote is about 0x1.e89c4e59427b173a8753edbcb95p-2, whose nearest double is 0x1.e89c4e59427b1p-2. To 20 decimal places, the two closest doubles are 0.47715876025960840545 and 0.47715876025960846096.

Perhaps Matlab is displaying a truncated value? (EDIT: I now see that the fourth-last digit is a 6, not a 0. Matlab is giving you a result that's still faithfully rounded, but it's the farther of the two closest doubles to the desired result. And it's still printing out the wrong number.)

I should also point out that Mathematica is probably solving a different problem: computing the sine of the decimal number 0.497418836818383950 to 20 decimal places. You should not expect this to match either the C++ code's result or Matlab's result.

tmyklebu
  • The two double constants outputted from Matlab and C++ are the closest representable real values to the real result (check http://i.imgur.com/ZSUyPmM.png). (The computation is not being done to 20 decimal places, but with the IEEE 754 standard, as noted in the question) – José D. May 29 '15 at 13:01
  • @Trollkemada: No, the last few digits of the Matlab result are bogus. No `double` has a decimal expansion that starts like that. – tmyklebu May 29 '15 at 13:24