Debug.Log((int)(4.2f * 10.0f));

Result: 41 (in Unity 2017.2.1p4)

Result: 42 (in Unity 5.4.3p4)

Why does it differ?

Salmon
  • Check this: https://stackoverflow.com/questions/1458633/how-to-deal-with-floating-point-number-precision-in-javascript – Immersive May 24 '18 at 10:10
  • hmm... I think the result of 4.2f * 10.0f is 41.9999998f – Salmon May 24 '18 at 11:29
  • Then it converts to int as 41. But why is it 42 in Unity 5.4.3p4? – Salmon May 24 '18 at 11:29
  • @연어회무침: Probably a different rounding mode, or it converts to double (exactly: 4.2 --> 4.20000000000000017763568394002504646778106689453125) – Rudy Velthuis May 24 '18 at 12:53
  • Possibly a change in the C# version. I always assume that the fractional part is just dropped, but maybe the newer version is rounding. Given that this is essentially a float-precision error, try running in a loop a few thousand times and counting how many times it spits back 41 or 42 – Immersive May 24 '18 at 13:26
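
A minimal sketch of the experiment suggested in the comment above, written as a hypothetical Unity test component (the class name and loop count are illustrative). Note that on a given build the compiler makes one choice, so every iteration is expected to give the same answer; the interesting comparison is across Unity versions:

using UnityEngine;

// Hypothetical component: tally how often the cast yields 41 vs 42.
public class FloatCastTally : MonoBehaviour
{
    void Start()
    {
        int count41 = 0, count42 = 0;
        // Keep the operands in variables so the expression is less likely
        // to be constant-folded at compile time.
        float a = 4.2f, b = 10.0f;
        for (int i = 0; i < 10000; i++)
        {
            int result = (int)(a * b);
            if (result == 41) count41++;
            else if (result == 42) count42++;
        }
        Debug.Log("41: " + count41 + ", 42: " + count42);
    }
}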

2 Answers


Floating-point numbers use an approximate representation, and floating-point math is not always deterministic. The same calculation can give different results, particularly across different computers, architectures, and compiler or optimization settings.

Thus, it's likely that there is a difference in compiler and optimization settings between Unity 5.4.3p4 and Unity 2017.2.1p4.

In your example, 4.2f * 10.0f can produce slightly different values such as 41.9999998f or 42.0000002f, which are cast to integer as 41 and 42 respectively.
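
To make the truncation behavior concrete, here is a minimal sketch in plain C# (the class name is illustrative, and Console.WriteLine stands in for Unity's Debug.Log; the double literal below is the exact product of the float nearest 4.2 and 10):

using System;

class TruncationDemo
{
    static void Main()
    {
        // A cast to int truncates toward zero; it never rounds.
        double justBelow = 41.9999980926513671875;
        Console.WriteLine((int)justBelow);             // 41
        // If rounding is what you want, round explicitly before casting.
        Console.WriteLine((int)Math.Round(justBelow)); // 42
    }
}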


sonnyb

The exact result of the product of the (single-precision) floats 4.2f and 10.0f is

41.9999980926513671875

The nearest float to this exact result is 42.0f. (The exact product lies exactly halfway between the two consecutive floats 41.999996185302734375 and 42.0, and the default round-to-nearest, ties-to-even rule resolves the tie to 42.0f.)

But this exact result fits in a double, so if the expression is evaluated in single precision it will print 42, whereas if it is evaluated in double precision it will print 41 (the cast to int truncates toward zero).

In other words, the expression can be decomposed into:

float x = 4.2f;     // nearest float to 4.2 is 4.19999980926513671875
x = x * 10.0f;      // exact product: 41.9999980926513671875
Debug.Log((int)x);  // 42 if x was rounded to single precision, 41 if kept in double

If the compiler decides to keep x in single precision, this prints 42; if it decides to evaluate x in double precision, it prints 41.
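
A small sketch in plain C# that forces each evaluation explicitly instead of leaving the choice to the compiler (the class name is illustrative; Console.WriteLine stands in for Debug.Log):

using System;

class PrecisionDemo
{
    static void Main()
    {
        // Forced double-precision evaluation: the exact product
        // 41.9999980926513671875 fits in a double, and the cast truncates.
        double d = (double)4.2f * 10.0;
        Console.WriteLine((int)d);   // 41

        // Forced rounding back to single precision: the nearest float
        // to the exact product is 42.0f.
        float s = (float)d;
        Console.WriteLine((int)s);   // 42
    }
}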

aka.nice
  • How did you know that the exact result is 41.9999980926513671875? – Salmon May 28 '18 at 00:55
  • Emulate the decimal (4.2f) to binary conversion (by repeated division, emulating the rounding mode: to nearest float, ties to even), then emulate the binary to decimal conversion. By hand it's overkill, but there are several languages that can do that, if they conform strictly enough to the IEEE 754 standard (I use Squeak Smalltalk: `(ArbitraryPrecisionFloat readFrom: '4.2' numBits: 24) asTrueFraction printShowingMaxDecimalPlaces: 100`). In C, it could be as simple as `printf("%.20f\n", 4.2f)` if printf does its rounding job correctly. – aka.nice May 28 '18 at 14:12
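
The same exact-decimal printing can be sketched in C# with System.Numerics.BigInteger (the class and method names below are illustrative, not a standard API). Every finite float is an integer m times a power of two; for a negative exponent e the exact value is m * 5^(-e) / 10^(-e), which always terminates in decimal:

using System;
using System.Numerics;

class ExactValue
{
    // Print the mathematically exact decimal expansion of a float
    // (finite values only; infinities and NaN are not handled).
    static string ExactDecimal(float f)
    {
        int bits = BitConverter.ToInt32(BitConverter.GetBytes(f), 0);
        bool negative = bits < 0;
        int biasedExp = (bits >> 23) & 0xFF;
        int mantissa = bits & 0x7FFFFF;

        // Normal numbers carry an implicit leading 1 bit;
        // value = mantissa * 2^(biasedExp - 127 - 23).
        int e;
        if (biasedExp != 0) { mantissa |= 1 << 23; e = biasedExp - 150; }
        else e = -149; // subnormals (and zero)

        BigInteger m = mantissa;
        if (e >= 0)
            return (negative ? "-" : "") + (m << e).ToString();

        // m * 2^e == m * 5^(-e) / 10^(-e): compute the scaled integer,
        // then place the decimal point -e digits from the right.
        string digits = (m * BigInteger.Pow(5, -e)).ToString().PadLeft(-e + 1, '0');
        string intPart = digits.Substring(0, digits.Length + e);
        string fracPart = digits.Substring(digits.Length + e).TrimEnd('0');
        return (negative ? "-" : "") + intPart
             + (fracPart.Length > 0 ? "." + fracPart : "");
    }

    static void Main()
    {
        Console.WriteLine(ExactDecimal(4.2f));  // 4.19999980926513671875
        Console.WriteLine(ExactDecimal(42.0f)); // 42
    }
}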