
We know that cos(2mπ) = 1 for every integer m. However, I get the following output.

value of m = 1.000000e+01 and value of cos(2*m*pi) = 1.000000000000000
value of m = 1.000000e+02 and value of cos(2*m*pi) = 1.000000000000000
value of m = 1.000000e+03 and value of cos(2*m*pi) = 1.000000000000000
value of m = 1.000000e+04 and value of cos(2*m*pi) = 1.000000000000000
value of m = 1.000000e+05 and value of cos(2*m*pi) = 1.000000000000000
value of m = 1.000000e+06 and value of cos(2*m*pi) = 1.000000000000000
value of m = 1.000000e+07 and value of cos(2*m*pi) = 1.000000000000000
value of m = 1.000000e+08 and value of cos(2*m*pi) = 0.999999999999997
value of m = 1.000000e+09 and value of cos(2*m*pi) = 0.999999999999998
value of m = 1.000000e+10 and value of cos(2*m*pi) = 0.999999999989970
value of m = 1.000000e+11 and value of cos(2*m*pi) = 0.999999999564035
value of m = 1.000000e+12 and value of cos(2*m*pi) = 0.999999854510183
value of m = 1.000000e+13 and value of cos(2*m*pi) = 0.999985451053279
value of m = 1.000000e+14 and value of cos(2*m*pi) = 0.999742535619873
value of m = 1.000000e+15 and value of cos(2*m*pi) = 0.888410566323832
value of m = 1.000000e+16 and value of cos(2*m*pi) = 0.718430574337184
value of m = 1.000000e+17 and value of cos(2*m*pi) = -0.438105159926831
value of m = 1.000000e+18 and value of cos(2*m*pi) = 0.176561618304251
value of m = 1.000000e+19 and value of cos(2*m*pi) = -0.114036978390490
value of m = 1.000000e+20 and value of cos(2*m*pi) = 0.689416156299807

Why do we not always compute the right output? As the value of m becomes larger, the approximation changes significantly. I'm not sure which type of floating-point error is causing this. Any help?
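
For reference, a minimal C program along these lines (a reconstruction, since the exact code isn't shown here) produces output of this form; the precise digits depend on the platform's math library:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double pi = acos(-1.0);  /* closest double to pi */

    /* m = 10^1, 10^2, ..., 10^20; these powers of ten are exactly representable */
    for (double m = 10.0; m <= 1.0e20; m *= 10.0) {
        printf("value of m = %e and value of cos(2*m*pi) = %.15f\n",
               m, cos(2.0 * m * pi));
    }
    return 0;
}
```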

boxy
  • Welcome to floating point precision. You also may want to look up how the cosine is calculated here. At a guess, it's probably an expansion of sorts to a certain precision, and larger argument values (`x`) will cause that expansion to create larger inaccuracies. –  Oct 01 '15 at 00:12
  • Related: http://stackoverflow.com/questions/2284860/how-does-c-compute-sin-and-other-math-functions –  Oct 01 '15 at 00:13
  • @Olaf: YAW. `sin` and `cos` are mathematically defined for all real numbers (indeed, for all complex numbers as well). But what you said about rounding errors is right. – TonyK Oct 01 '15 at 00:30
  • @TonyK: Ok, I was not sure if the "modulus"-part was included in the definition or not. Thanks for correcting me. As the rounding error is part of the answers, I deleted my comment. – too honest for this site Oct 01 '15 at 13:46
  • The answers already given explain why this is happening. If you are encountering this situation in a real-life scenario and need to avoid it, check whether your platform offers a function `cospi()`, where `cospi(x)` computes cos(πx). – njuffa Oct 02 '15 at 02:44

2 Answers

4

It's probably because the value of PI itself (the computer representation of it, not the mathematical value) is not exact.

It may be 3.141592653589 (which is all I can remember off the top of my head) but, unless you have an infinite number of bits to store it (or you use a symbolic rather than binary-coded form), it will never be totally accurate.

And, as you multiply it by larger integers, the imprecision may well increase.

The vagaries of floating point representations are well known, to the point where you can only get about fifteen digits of precision from an IEEE754 double precision representation. Given that PI requires a ... well ... never-ending number of bits, something's got to give.
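
To put a number on "about fifteen digits", the limits are published in `<float.h>` (a tiny illustration, assuming an IEEE754 double):

```c
#include <float.h>
#include <stdio.h>

int main(void)
{
    /* DBL_DIG: decimal digits that can be round-tripped through a double
       without change (15 for IEEE754 doubles) */
    /* DBL_EPSILON: gap between 1.0 and the next representable double
       (about 2.22e-16) */
    printf("DBL_DIG     = %d\n", DBL_DIG);
    printf("DBL_EPSILON = %g\n", DBL_EPSILON);
    return 0;
}
```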


I'm not entirely certain what sort of application would be using values like 10²⁰π and I don't pretend to know your situation, but you may want to give some thought to trying to clamp the values to a more "sensible" range like [0, 2π).
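
One way to do that, if the argument naturally arrives as "m revolutions" rather than as radians, is to strip the whole revolutions off m *before* multiplying by 2π. `fmod` on doubles is exact, so the reduction itself introduces no error. A sketch (the helper name `cos_revolutions` is just for illustration):

```c
#include <math.h>
#include <stdio.h>

/* cos(2*pi*m), reducing m modulo one revolution before multiplying by 2*pi.
   fmod() is exact for doubles, so only the final multiply and cos() round. */
static double cos_revolutions(double m)
{
    double frac = fmod(m, 1.0);           /* fractional part of m, exact */
    return cos(2.0 * acos(-1.0) * frac);  /* argument now inside (-2*pi, 2*pi) */
}

int main(void)
{
    for (double m = 10.0; m <= 1.0e20; m *= 10.0)
        printf("m = %e  cos_revolutions(m) = %.15f\n", m, cos_revolutions(m));
    return 0;
}
```

For any integer-valued m, `fmod(m, 1.0)` is exactly 0.0, so this returns exactly 1.0 no matter how large m gets.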

paxdiablo
  • This is just one of the textbook exercises I found. I think floating point overflow also contributes to this imprecision, when 2*m*pi becomes too large to be approximated accurately in a standard computer system. – boxy Oct 01 '15 at 00:34
  • @rcgldr, careful there, I'm not *sure* (though I haven't checked exhaustively) that rounding `m / 1.0` towards zero will work in all cases. It may be no different to integer math `99 / 10`, which would give `9` rather than the "more correct" `10`. – paxdiablo Oct 01 '15 at 00:48
  • @paxdiablo - deleted my prior comment. Although fmod(m, 1.0) should result in 0.0 for any integer value of m, including any m >= 10^16 in double precision, this would not work if m is not an integer value. From the prior thread, [argument reduction for huge arguments](http://www.csee.umbc.edu/~phatak/645/supl/Ng-ArgReduction.pdf) explains one approach to deal with this issue. Regarding the limits of double precision, (2^52)+1 > (2^52), but (2^53)+1 == (2^53). – rcgldr Oct 01 '15 at 02:02
3

The difference between the closest 64-bit floating point number to PI, which I will call piDouble, and the exact value of PI, piExact, is about 1.22E-16. The difference m*piExact - m*piDouble == m*(piExact - piDouble) is about m*1.22E-16.
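
You can see that gap directly by printing the value your compiler actually uses for pi (a small illustration; the exact digits assume a typical IEEE754 double and a correctly rounded `acos`):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double piDouble = acos(-1.0);
    /* prints 3.14159265358979311600; the exact value is 3.14159265358979323846...,
       so piExact - piDouble is roughly 1.22e-16 */
    printf("piDouble = %.20f\n", piDouble);
    return 0;
}
```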

Functions like cosine are evaluated by first reducing the angle to a relatively small range of angles over which the library has a good approximation to cosine.

As m gets bigger, m*1.22E-16 first gets big enough to matter, and then to dominate in the angle reduction result.

Patricia Shanahan