Here is how my calculator should work:
There is a JSON file where I write the first multiplier - something like this:
    {
        "value1": 1.4
    }
On the calculator I can enter the second multiplier - only powers of ten (10, 100, ..., 10000000). My calc should return an integer, because I know that the people who use my calc will always write fewer digits after the decimal point in the first multiplier than there are zeros in the second multiplier. Yes, my calc is a very, very strange one.
Here are valid inputs:
v1=1.4; v2=100;
v1=1.414; v2=100000;
v1=1.1; v2=100;
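For illustration, here is a stripped-down sketch of what my calc effectively does with these inputs (I am assuming a plain cast to integer here, which matches the behavior I see; the names are made up for the example):

    #include <cstdio>

    int main() {
        double value1 = 1.4;      // first multiplier, as read from the JSON file
        long long value2 = 10000; // second multiplier, always a power of ten

        // A plain cast truncates toward zero, so whenever the product comes out
        // just below the true integer (e.g. 13999.999...), the result is off by one.
        long long result = (long long)(value1 * value2);
        printf("%lld\n", result);
    }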
What happens when I do this? For example, for value1=1.4 and value2=10000 I get 13999. Since a float cannot represent every number exactly, it sometimes stores a slightly different one; for 1.4 it internally stores something like 1.399999 on my machine. I know why, but the QA engineer who tests my app tells me that I need to get 14000 - "your calc does not work". How can I make my calc print the correct number?
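If I reason about the guarantee above correctly, the fix should be to round instead of truncate: since the first multiplier always has fewer decimal digits than the second has zeros, the exact product is an integer, and the computed product can only be off by a tiny amount, far less than 0.5. So rounding to the nearest integer (std::llround from <cmath>) should always recover the exact value. A minimal sketch - is this reasoning sound?

    #include <cmath>
    #include <cstdio>

    int main() {
        double value1 = 1.4;      // internally ~1.3999999999999999
        long long value2 = 10000;

        // The product is at most a few ULPs away from the true integer 14000,
        // so rounding to the nearest integer yields the exact answer.
        long long result = std::llround(value1 * value2);
        printf("%lld\n", result); // 14000
    }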
P.S. Of course I have cut my real problem down for this question, but the gist is: I have a float in a file and a 10^n number in my program as user input. How do I get the correct result?
EDIT1: I am not asking why floats work that way. I know why. I am asking how to solve the problem even though floats work that way.
EDIT2: I use RapidJSON to read the JSON file, and it already hands me the slightly-off number as a double-precision value. I can't use libraries that provide higher-precision floating-point types.
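For completeness, here is roughly how the pieces fit together with RapidJSON (the JSON string literal stands in for the real file contents; the rounding at the end is the part I am asking about):

    #include <cmath>
    #include <cstdio>
    #include "rapidjson/document.h"

    int main() {
        // Stand-in for the real file contents.
        const char* json = "{ \"value1\": 1.4 }";

        rapidjson::Document doc;
        doc.Parse(json);

        double value1 = doc["value1"].GetDouble(); // already the slightly-off double
        long long value2 = 10000;                  // the 10^n user input

        // Round to nearest; the digits-vs-zeros guarantee keeps the product
        // within 0.5 of the true integer.
        printf("%lld\n", std::llround(value1 * value2));
    }

Another direction I have seen suggested is to read the number from the file as a raw string and shift the decimal point textually, which avoids floating point entirely, but the rounding approach looks simpler if it is sound.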