Why this is not a duplicate of the linked Q&A:
I encountered this problem in a minor Objective-C implementation where NSNumber values showed varying floating-point precision. I needed to understand what actually happens and how to resolve it, and none of the answers on the linked Q&A address that.
Note: This is not about which type to use for the values. I was struggling to understand the behavior of NSNumber.
I have this very simple method that basically uses NSNumber:
+ (NSNumber *)addNumber:(NSNumber *)firstNumber withSecondNumber:(NSNumber *)secondNumber {
    NSNumber *result = [NSNumber numberWithDouble:([firstNumber doubleValue] + [secondNumber doubleValue])];
    NSLog(@"Result: %@", result); // This will print out our result for testing
    return result;
}
So if I call this method with parameters as follows:
[Fundamentals addNumber:[NSNumber numberWithDouble:5.0]
withSecondNumber:[NSNumber numberWithDouble:3.5]];
Printed output: 8.5 ---> exactly what I was expecting.
But if I call this method with the following parameters:
[Fundamentals addNumber:[NSNumber numberWithDouble:50.0]
withSecondNumber:[NSNumber numberWithDouble:36.54]];
It prints out the following: 86.53999999999999 ---> which troubled me and still puzzles me!
Note: It is not only when printing; the debugger also shows the value exactly as printed above.
Can somebody explain this please?
Thanks in advance!