
This is a bit of an odd one. I'm getting different outputs for the same input from the same bit of code at different times.

It's a very simple calculation, just getting the radians for a given angle in degrees, in a class that handles compass-type stuff. It started off like this:

public double Radians  
{  
    get   
    {  
        return this.heading_degrees * Math.PI / 180;  
    }  
    set  
    {  
       this.heading_degrees = value * 180 / Math.PI;  
       normalize();  
    }  
}  

(heading_degrees is a member variable in the Compass class)
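For context, here is a minimal self-contained version of the class. Note that normalize() is a hypothetical stand-in (the real one isn't shown in the question), assumed to wrap the heading into [0, 360):

```csharp
using System;

// Minimal sketch of the Compass class from the question.
// normalize() is a hypothetical stand-in for the real method.
class Compass
{
    private double heading_degrees;

    public double Radians
    {
        get { return this.heading_degrees * Math.PI / 180; }
        set
        {
            this.heading_degrees = value * 180 / Math.PI;
            normalize();
        }
    }

    public double Degrees { get { return this.heading_degrees; } }

    // Wrap the heading into [0, 360).
    private void normalize()
    {
        this.heading_degrees %= 360.0;
        if (this.heading_degrees < 0) this.heading_degrees += 360.0;
    }
}

class CompassDemo
{
    static void Main()
    {
        var c = new Compass();
        c.Radians = Math.PI;            // set heading via radians
        Console.WriteLine(c.Degrees);   // ~180 (within double rounding)
        Console.WriteLine(c.Radians);   // ~3.14159265358979
    }
}
```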
Looks OK, right?
Except I was getting different results when 'getting' the Radians for a given angle.
So I dug deeper and changed the code; 'get' now looks like this:

get  
{  
    //double hd = heading_degrees;  
    double hd = 180.0;  
    //double pi = Math.PI;  
    double pi180 = 0.01745329251; //pi / 180;  
    double result = hd * pi180;  
    //double result = 3.14159265359;  
    return result;  
    //return heading_degrees * Math.PI / 180;  
}  

As you can see from the commented-out lines, I've tried different things to try and get to the bottom of this.
Setting double result = 3.14159265359; did return 3.14159265359 consistently;
however, returning double result = hd * pi180; as in the above code does NOT return a consistent result. As you can see, heading_degrees is exactly 180.0 now, just for testing and to prove that the input IS exactly the same. When I hit this code the first time, I get this result:
result = 3.1415926518
The second time through I get this:
result = 3.1415927410125732

I've tried this on two computers in an attempt to see whether the problem was environmental. I've not yet been able to test it in different IDEs (currently using VS Express 2012). Anyone got any ideas as to why this could be happening? I'm not threading anywhere (and even if I was, how would it change the result in the current iteration of the code, with the input being set at 180.0?). One little clue I seem to have found is that making little changes to the code (i.e. using Math.PI instead of 3.14159..., etc.) changes the result the first time through; however, the result the second time through seems to always be 3.1415927410125732.

Apologies for the extremely long winded post.

Other notes: the second run-through is just another place in the program that is calling this function. It's not a difference between debug and release. Using .NET 4.

More tests:

if the get code is:

get
{
    double result = 180.0d * 0.01745329251d;
    return result;
}

The result is consistent, to the greater accuracy.

if the get code is:

get
{
    double hd = 180.0d;
    double result = hd * 0.01745329251d;
    return result;
}

The result is NOT consistent.

if I do:

get
{
    double hd = 180.0d;
    double result = (float)(hd * 0.01745329251d);
    return result;
}

The result is consistent, but to the lower accuracy.
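For what it's worth, that lower-accuracy value can be pinned down deterministically: an explicit (float) cast forces the intermediate down to float precision, which appears to be the same truncation the runtime sometimes performs on its own. A minimal standalone sketch (the class and method names are mine):

```csharp
using System;

class CastDemo
{
    // Full double-precision product of 180 and the degrees-to-radians factor.
    public static double Full()
    {
        double hd = 180.0;
        double pi180 = 0.01745329251;
        return hd * pi180;
    }

    // Same product, explicitly truncated to float precision and widened back.
    public static double Truncated()
    {
        double hd = 180.0;
        double pi180 = 0.01745329251;
        return (double)(float)(hd * pi180);
    }

    static void Main()
    {
        Console.WriteLine(Full().ToString("R"));      // ~3.1415926518
        Console.WriteLine(Truncated().ToString("R")); // 3.1415927410125732
    }
}
```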

note that in the above tests the variables are all local to the getter!
Also note that I only appear to be getting the inconsistency when I run the full code. Is it something about how I'm storing the object that the getter belongs to that causes this?
I think I need to read Eric Lippert's reply to one of the answers again. Eric, if you write those two replies up as an answer I'll probably mark them as the answer, especially since the last example above is doing pretty much what you said with the cast.

And THIS looks like gold:
Fixed point math in c#?
It appears to answer how to get out of the hole I've dug myself into, especially as I've found there are many, many functions similar to the above which are giving me the exact same headache.
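As a taste of the fixed-point approach, here is a hypothetical minimal sketch (the type and member names are mine, not from the linked question): the heading is stored as integer micro-degrees, and since integer arithmetic is exact, the same inputs always give bit-identical results.

```csharp
using System;

// Hypothetical minimal fixed-point angle: the heading is stored as an
// integer count of micro-degrees (1 degree = 1,000,000 units).
// Integer math is exact, so results are reproducible bit-for-bit.
struct FixedAngle
{
    public const long UnitsPerDegree = 1000000;
    public long MicroDegrees;

    public static FixedAngle FromDegrees(double degrees)
    {
        // One rounding happens here, at the boundary; everything after is exact.
        return new FixedAngle { MicroDegrees = (long)Math.Round(degrees * UnitsPerDegree) };
    }

    // Wrap into [0, 360) using exact integer arithmetic.
    public FixedAngle Normalized()
    {
        long m = MicroDegrees % (360 * UnitsPerDegree);
        if (m < 0) m += 360 * UnitsPerDegree;
        return new FixedAngle { MicroDegrees = m };
    }

    public double ToDegrees() { return (double)MicroDegrees / UnitsPerDegree; }
}

class FixedDemo
{
    static void Main()
    {
        var a = FixedAngle.FromDegrees(540.0).Normalized();
        Console.WriteLine(a.ToDegrees()); // 180
    }
}
```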

se5a
  • Can you show the code that is calling and displaying the results? – Karl Bielefeldt Jan 02 '14 at 20:50
  • It may be best to point to the project in its entirety. It's a little messy currently but. project: https://bitbucket.org/ekolis/freee the above code is at line 250 of: https://bitbucket.org/ekolis/freee/src/c31fea2796d8083ee7e13de863a3106048cfc3f6/FrEee/Game/Objects/Combat2/Point3d.cs?at=default it's getting called from line 313 of: https://bitbucket.org/ekolis/freee/src/c31fea2796d8083ee7e13de863a3106048cfc3f6/FrEee/Game/Objects/Combat2/Point3d.cs?at=default (both times, the second run-through is the 'replay' iteration of the code) – se5a Jan 02 '14 at 21:01
  • Are you interested in *why* these values are different, or in *how* to fix the bug they are introducing? –  Jan 02 '14 at 21:02
  • 1
    Both. I can probably truncate the result and keep enough accuracy to fix the bug that it's introducing, however the WHY is important because it could help me find other places that this might happen. – se5a Jan 02 '14 at 21:04
  • @se5a just trying to make sure you are aware of what you are asking where, and of the difference between StackOverflow (where the 'how' part would likely be more addressed) and Programmers.SE (where the 'why' part would likely be more addressed). As it's the why you are most after, it is likely a good fit here. –  Jan 02 '14 at 21:08
  • Ah, yeah, the why is probably more important since it's going to address the how anyway. But it looks like it's been moved anyway. – se5a Jan 02 '14 at 22:29
  • @se5a ultimately it did, it is one of the rare questions that is equally appropriate on each site. That said, you happened to have gotten the best possible definitive answer here from Eric. –  Jan 03 '14 at 15:29

1 Answer


Not sure why my mind jumped straight to release vs. debug, but the hardware itself is inconsistent even on the same processor: Is floating-point math consistent in C#? Can it be? The short answer is that intermediates can use higher-precision values some of the time, generating different results depending on when things get truncated.

Old answer:
There are differences between release and debug; check out this question for getting started.

Float/double precision in debug/release modes

If you need highly consistent results you might want decimals not doubles.
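A quick sketch of the difference (note that decimal is exact for decimal fractions, but System.Math trig functions such as Math.Cos still take doubles):

```csharp
using System;

class DecimalDemo
{
    static void Main()
    {
        // Binary doubles cannot represent 0.1 or 0.2 exactly,
        // so the sum picks up rounding error.
        Console.WriteLine(0.1 + 0.2 == 0.3);    // False

        // decimal stores base-10 digits exactly, so the sum is exact.
        Console.WriteLine(0.1m + 0.2m == 0.3m); // True
    }
}
```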

Sign
  • 1
    Thanks for your reply; however, this is not a difference between release and debug (I've edited the OP to show that). Also, the point about decimals does not explain the inconsistency. – se5a Jan 02 '14 at 20:41
  • your edited reply has a very useful link. I will be reading through the replies on that one, as it's the question that this problem will eventually lead to. I've not yet seen anything in that link about inconsistencies on the same processor, though. Also, one reason I'm using doubles is that most of the System.Math functions require a double as input, i.e. Math.Cos etc. I may have to rethink that and write my own math class... – se5a Jan 02 '14 at 21:12
  • 2
    @se5a: This is a frequently asked question. See for instance also http://stackoverflow.com/questions/8795550/casting-a-result-to-float-in-method-returning-float-changes-result/8795656#8795656 – Eric Lippert Jan 02 '14 at 22:36
  • OK, so if I understand it all correctly, sometimes the compiler decides it's a good idea to cast the result to a float and other times not. If I use double result = (float)(hd * pi180); I get the same result each time (3.1415927410125732). So when and why does the compiler decide to do it one way and not the other? – se5a Jan 02 '14 at 22:55
  • 4
    @se5a: The official line is: the compiler can do so at its whim. In practice, the compiler truncates back to the "natural" precision when (1) there's an explicit cast, (2) the value is stored to a heap location (that is, a field of a class type, or a field of a struct where the struct is on the heap, or an array element.) Other than those situations it is free to do math in higher precision for any reason whatsoever. – Eric Lippert Jan 02 '14 at 23:07
  • 7
    @se5a: You might wonder *why* this oddity exists. The reason is that there are a small number of floating point registers available on the chip, and doing math in those registers is (1) faster, and (2) higher precision. But because there are a limited number of them, sometimes the values have to be "kicked out" of the registers, which truncates them. The exact details of how the registers are scheduled is implementation-defined. – Eric Lippert Jan 02 '14 at 23:10
  • Ah, that explains it a bit more. Still not sure why in my example it was using higher precision for the first iteration but not for the second, but I think I understand it enough to satisfy my curiosity. – se5a Jan 02 '14 at 23:41
  • So basically you're only allowed to have a certain amount of doubles at a time, and if other programs are running, you might not get any at all? Why would anyone even want to use a double, then, if they're so unreliable? – ekolis Jan 02 '14 at 23:51
  • Actually, yeah. Ed brings up a point, and maybe I don't understand it. I'm using doubles, so it should do the math as doubles, except it's sometimes doing the math as floats... but if I'm understanding what you've said, if I'm using *floats* it will sometimes do the math as doubles; however, what I'm experiencing is the other way around. – se5a Jan 02 '14 at 23:58
  • Also, if there's a finite number of registers, shared between all running programs, why are we even getting vaguely consistent results? Shouldn't it sometimes work (say, when there are not many other programs running), sometimes not, and sometimes have both numbers come up incorrectly cast to floats? – ekolis Jan 03 '14 at 00:12
  • @ekolis, doubles are fast and have a large range, but are not exact, so people use them where they need speed and a large range and don't need exact values. If you need exact values, use another type. – adrianm Jan 03 '14 at 05:05
  • @ekolis: The registers are not shared between processes; when the CPU switches over to a different process its register state gets restored to how it was when it last switched away. But within a process the jitter is allowed to generate code that uses those registers in any manner that it chooses, and that can change from moment to moment depending on what the jitter believes is the most efficient way to schedule the registers. – Eric Lippert Jan 03 '14 at 17:45
  • @ekolis: You have to understand what the by-design purpose of doubles is. They were designed to do *bulk scientific calculations* as *quickly* and with *as much precision* as possible. The idea is to have *way more precision than the measurement*; if your experiment gives data that is accurate to 0.0001 and the calculation engine is accurate to 0.0000000000000001 then the calculation engine is introducing billions of times less error than is already present in the data. An error that small is irrelevant; if the error can be made smaller while making the calculation faster, that's always good! – Eric Lippert Jan 03 '14 at 17:55
  • @ekolis: The problem is then that people use doubles for things they were never designed for, like, say, replaying a video game scenario given only the input timings and expecting that the results of the physics simulation will be bit-for-bit identical on the replay. That's not the scenario that doubles were designed for, so it is unfortunate that people use them that way. – Eric Lippert Jan 03 '14 at 17:57
  • @EricLippert: Perhaps that's not what they were designed for, but if so, those limitations should have been documented more clearly. I don't think it's reasonable for Microsoft to assume that everyone using .NET knows the implementation details, or even the IEEE specs, of floating-point numbers! – ekolis Jan 04 '14 at 13:42
  • @ekolis: The documentation says: `Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an “extended” or “long double” floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type. C# allows a higher precision type to be used for all floating-point operations.` And then goes on to give examples of how this can change results. – Eric Lippert Jan 04 '14 at 15:01
  • @ekolis: Please read the last paragraph of section 4.1.6 of the specification and explain to me how you would like that to be made more clear. I'll be happy to pass your suggestion on to the documentation manager for C#. – Eric Lippert Jan 04 '14 at 15:03
  • @EricLippert: Unfortunately, the spec seems to be locked behind a paywall. If what you quote is the paragraph in question, though, it says nothing about doubles being automatically downcast to floats, which is what we're seeing here - only about floats being upcast to doubles. What concerns me is that the CLR appears to be violating the basic contract of the double data type, which is to store a high precision floating point number, and if I can't trust it to do even this simple task, what should I trust it to do? – ekolis Jan 04 '14 at 16:53
  • @ekolis: Doubles are never automatically downcast to floats in the CLR; if that is the conclusion that you are reaching then your analysis contains an error. I've seen no evidence presented here that doubles are being downcast to floats automatically. The specification is here: http://www.microsoft.com/en-us/download/details.aspx?id=7029 – Eric Lippert Jan 04 '14 at 17:32
  • @ekolis: Or, look in `Program Files (x86)/Microsoft Visual Studio 12.0/VC#/Specifications/1033` -- some versions of VS install the specification on your machine automatically. – Eric Lippert Jan 04 '14 at 17:39
  • @ekolis: Or, ask StackOverflow: http://stackoverflow.com/questions/13467103/where-can-i-find-the-c-sharp-5-language-specification – Eric Lippert Jan 04 '14 at 17:39
  • @ekolis: Or get it from Jon Skeet: http://csharpindepth.com/articles/chapter1/Specifications.aspx – Eric Lippert Jan 04 '14 at 17:40
  • @EricLippert: OK, thanks for the clarification! Guess there must be some other bug in our code! We eventually solved the problem by switching to a fixed-point math library, though there still seems to be a bit of desync between the simulation and the replay, but it's not as glaring as before. – ekolis Jan 05 '14 at 22:12
  • I think the reason doubles are the go-to for, er, noobs like us is primarily that doubles are seen as 'just more accurate floats' and that System.Math uses doubles. My thought process when writing this was: doubles will give us problems later, but only between different architectures; I'll get this going using doubles as a proof of concept, then cross the bridge of different architectures later. There's no competing data type or math library in the default C# libraries. – se5a Jan 05 '14 at 22:33
  • The reason we thought it was getting downcast to a float was that in the last test example (where I cast to a float) of the OP, I was getting the exact same result as the 'less correct' answer from the second-to-last example. – se5a Jan 05 '14 at 22:39