double is a floating binary point type. In binary, the value of "a half" is 0.1, the value of "a quarter" is 0.01, and so on. There is no way of exactly representing "a tenth" in a finite binary representation, any more than you can exactly represent "a third" in decimal. The compiler is giving you the closest value it can to the value you've actually asked for.
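For instance, here's a minimal sketch (plain Foundation, not from the original question) that prints the value a double actually stores when you write 0.1:

```objc
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // 0.1 has no finite binary representation, so the double holds the
        // nearest representable value instead of exactly one tenth.
        double tenth = 0.1;
        NSLog(@"%.20f", tenth);     // 0.10000000000000000555...
        NSLog(@"%.20f", 0.1 + 0.2); // 0.30000000000000004441..., not 0.3
    }
    return 0;
}
```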
If you want to store decimal values precisely because you care about the decimal digits (e.g. for currency), you should use a decimal-based type such as NSDecimalNumber, or an integer scaled appropriately (e.g. storing 15 for 15 cents instead of 0.15 dollars).
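As a rough illustration of why that matters for currency (the 0.10 price and the loop are just made-up figures for the sketch), NSDecimalNumber keeps the decimal result exact where double drifts:

```objc
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // NSDecimalNumber stores decimal digits exactly, so repeated
        // additions of 0.10 land on 1 rather than just short of it.
        NSDecimalNumber *price = [NSDecimalNumber decimalNumberWithString:@"0.10"];
        NSDecimalNumber *total = [NSDecimalNumber zero];
        for (int i = 0; i < 10; i++) {
            total = [total decimalNumberByAdding:price];
        }
        NSLog(@"%@", total);        // 1, exactly

        // The same sum with double accumulates binary rounding error.
        double dTotal = 0.0;
        for (int i = 0; i < 10; i++) {
            dTotal += 0.1;
        }
        NSLog(@"%.20f", dTotal);    // 0.99999999999999988898..., not 1
    }
    return 0;
}
```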
I have articles on binary and decimal floating point in .NET. NSDecimalNumber in Objective-C is slightly different to decimal in C# (see the documentation), but hopefully those articles will give you a bit more insight into what's actually happening.
EDIT: As noted in the comments, decimal floating point types are typically significantly slower than binary floating point types, partly because they're often larger and partly because they don't have CPU support. If you have a hard performance requirement and you want to retain digits precisely, the "integer and implied scale" option is usually a good one, though it's a pain to code against, as you have to keep the scale in mind every time you read or write the code :)
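A minimal sketch of that integer-and-implied-scale approach, assuming amounts held as whole, non-negative cents (the variable names are illustrative, not part of the answer):

```objc
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Amounts are stored as whole cents; the scale of 100 only shows up
        // when formatting for display, so the arithmetic itself stays exact.
        long long itemCents  = 15;             // 15 cents, stored exactly
        long long totalCents = itemCents * 7;  // integer arithmetic throughout
        NSLog(@"%lld.%02lld", totalCents / 100, totalCents % 100); // 1.05
    }
    return 0;
}
```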