
Possible Duplicate:
Why can't decimal numbers be represented exactly in binary?

When I enter 0.1 as a double value, the compiler adds a tiny value to the end of it, which is causing other calculations in my program to go wrong. My code simply says:

double temp = 0.1;

And I get this in the variable viewer: http://img.skitch.com/20111122-nnrcgi4dtteg8aa3e8926r3fd4.jpg
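The same stored value shows up outside the variable viewer if the double is printed with more digits than its default display (a minimal C sketch, assuming the usual IEEE 754 doubles):

#include <stdio.h>

int main(void) {
    double temp = 0.1;
    printf("%.20f\n", temp);  // prints 0.10000000000000000555...
    return 0;
}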

Does anyone know why this is happening?

Thanks


1 Answer


double is a floating binary point type. In binary, the value of "a half" is 0.1, and the value of "a quarter" is 0.01 etc. There is no way of exactly representing "a tenth" in a finite binary representation, any more than you can exactly represent "a third" in decimal. The compiler is giving you the closest value it can to the value you've actually asked for.
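A quick way to see this (a minimal sketch, assuming IEEE 754 doubles, which is what double is on common platforms): halves and quarters print exactly, while a tenth does not.

#include <stdio.h>

int main(void) {
    printf("%.20f\n", 0.5);   // 0.50000000000000000000 (exactly 0.1 in binary)
    printf("%.20f\n", 0.25);  // 0.25000000000000000000 (exactly 0.01 in binary)
    printf("%.20f\n", 0.1);   // 0.10000000000000000555... (nearest double to a tenth)
    return 0;
}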

If you want to store decimal values precisely because you care about the decimals (e.g. for currency) you should use a decimal-based type such as NSDecimalNumber, or an integer scaled appropriately (e.g. storing 15 for 15 cents instead of 0.15 dollars).
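For example, here is a minimal sketch contrasting the two (Foundation only; the ten-additions loop is just an illustration of drift, not code from the question):

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Adding 0.1 ten times as a double accumulates binary rounding error.
        double d = 0.0;
        for (int i = 0; i < 10; i++) d += 0.1;
        NSLog(@"double sum:  %.17f", d);  // 0.99999999999999989

        // NSDecimalNumber works in decimal, so the same sum stays exact.
        NSDecimalNumber *tenth = [NSDecimalNumber decimalNumberWithString:@"0.1"];
        NSDecimalNumber *sum = [NSDecimalNumber zero];
        for (int i = 0; i < 10; i++) sum = [sum decimalNumberByAdding:tenth];
        NSLog(@"decimal sum: %@", sum);   // 1
    }
    return 0;
}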

I have articles on binary and decimal floating point in .NET. NSDecimalNumber in Objective-C is slightly different to decimal in C# (see the documentation), but hopefully those articles will give you a bit more insight into what's actually happening.

EDIT: As noted in comments, typically decimal floating point types are significantly slower than binary floating point types, partly because they're often larger and partly because they don't have CPU support. If you have a hard performance requirement and you want to retain digits precisely, the "integer and implied scale" option is usually a good one, though a pain to code against as you need to take it into account every time you read the code :)
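A minimal sketch of that option, assuming money is held as integer cents (the 8% rate and the variable names here are invented for illustration):

#include <stdio.h>

int main(void) {
    // All money is stored as cents, so every value is an exact integer.
    long long priceCents = 1999;                       // $19.99
    long long taxCents = (priceCents * 8 + 50) / 100;  // 8% tax, rounded to the nearest cent
    long long totalCents = priceCents + taxCents;
    printf("total: $%lld.%02lld\n", totalCents / 100, totalCents % 100);  // total: $21.59
    return 0;
}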

Jon Skeet