
I recently ran into a small issue in iOS Swift when converting a Double to an Int.

var myDoubleValue: Double = 10.2
var newInt = Int(myDoubleValue * 100) //1019

Here I am multiplying 10.2 by 100 and NOT getting the expected value (1020); instead I get 1019.

But it works correctly in most cases, as follows:

var myDoubleValue: Double = 10.3
var newInt = Int(myDoubleValue * 100) //1030

I also tested this in Java, and the same issue occurs there as well:

double myDoubleValue = 10.2;
int myIntValue1 = (int)(myDoubleValue * 100); //1019

I could overcome the issue by applying round before converting to Int, as follows:

var myDoubleValue: Double = 10.2
var newInt = Int(round(myDoubleValue * 100)) //1020

But this does something I don't want when the value has 3 decimal places, because it rounds in that case too (10.206 → 1021).

JibW
    First, the displayed value is just that; it is not the entire value. Use formatting to see more digits. Second, 10.2 or 10.3 cannot be exactly represented in a base-2 floating point number, so results can be very slightly different than expected. Third, it might help if you spent some time learning about floating point numbers. – zaph Aug 25 '15 at 14:09
    Strongly related: [Is floating point math broken?](http://stackoverflow.com/questions/588004/is-floating-point-math-broken) – Martin R Aug 25 '15 at 14:24

4 Answers


This is the result of round-off/truncation errors. Actually,

  myDoubleValue = 10.2 

is something like

  myDoubleValue = 10.1999999999999

so

 myDoubleValue * 100 = 1019.999999999

and after being truncated to int you get 1019. A better practice is to round rather than truncate:

 double myDoubleValue = 10.2;
 int myIntValue1 = (int)(myDoubleValue * 100 + 0.5); // note "+ 0.5"
 System.out.println(myIntValue1);
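
The same trick in Swift, mirroring the question's code (a sketch; like round, the + 0.5 correction assumes a non-negative value):

var myDoubleValue: Double = 10.2
var newInt = Int(myDoubleValue * 100 + 0.5) // 1020

Note that this behaves like round for the 10.206 case as well: it still yields 1021.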
Dmitry Bychenko

A displayed value is just that; it is not the entire value. Use formatting to see more digits. 10.2 and 10.3 cannot be exactly represented in a base-2 floating point number, so results can be very slightly different than expected.

Here is an example that shows what is happening:

var myDoubleValue1: Double = 10.2
print(String(format: "myDoubleValue1: %.17f", myDoubleValue1))
// myDoubleValue1: 10.19999999999999929

var myDoubleValue2: Double = 10.3
print(String(format: "myDoubleValue2: %.17f", myDoubleValue2))
// myDoubleValue2: 10.30000000000000071

Note that 10.2 converts to a floating point Double slightly below 10.2, so the scaled result will be truncated down (to 1019).

Note that 10.3 converts to a floating point Double slightly above 10.3, so the scaled result will not be truncated down (it stays 1030).

If exact decimal math is required, such as in a calculator or when dealing with money, use NSDecimalNumber.
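
A minimal sketch of that approach (the variable names are illustrative; constructing the numbers from strings keeps the decimal values exact):

import Foundation

let exact = NSDecimalNumber(string: "10.2")
let scaled = exact.multiplying(by: NSDecimalNumber(string: "100"))
let newInt = scaled.intValue // 1020
print(newInt)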

zaph

Java answer

This is because the double and float types (and their wrapper classes) cannot represent most decimal fractions exactly, so arithmetic on them does not yield precise decimal values.

In Java, the recommended idiom for what you're looking for is:

import java.math.BigDecimal;

BigDecimal myDoubleValue = new BigDecimal("10.2");
int myInt = myDoubleValue.multiply(new BigDecimal("100")).intValue();
System.out.println(myInt);

Output

1020
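
Note that the BigDecimal values are constructed from strings ("10.2") rather than from double literals: new BigDecimal(10.2) would capture the binary representation error described in the question, while the string constructor represents the decimal value exactly.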
Mena
    In Swift and Objective-C there is the `NSDecimalNumber` class that handles decimal numbers exactly. – zaph Aug 25 '15 at 14:11
  • @zaph thanks for the comment. I know nearly nothing about Objective-C / Swift, so I couldn't answer on that part :) – Mena Aug 25 '15 at 14:12

It appears that you want to round up a small amount of representation error, but round down otherwise.

You can do this by adding a correction much smaller than 0.5 before truncating:

double d1 = 10.199999;
long l1 = (long) (d1 * 100 + 0.01); // l1 = 1020 (the tiny error is corrected upward)

double d2 = 10.206;
long l2 = (long) (d2 * 100 + 0.01); // l2 = 1020 (the real third decimal is still truncated)
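
The same idea in Swift with the question's values (a sketch; the 0.01 correction is an assumption that the floating point error is far below 0.01, while a real third decimal digit shifts the scaled value by at least 0.1):

let value1 = 10.2
let int1 = Int(value1 * 100 + 0.01) // 1020: the tiny representation error is absorbed

let value2 = 10.206
let int2 = Int(value2 * 100 + 0.01) // 1020: the real third decimal is still truncated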
Peter Lawrey