
I stumbled upon this strange behavior of the Convert class and I want to share it.

The Convert.ToInt16/32/64 methods, when used on a double or decimal value, round in a strange way.

From MSDN:

converted value is rounded to the nearest 16-bit signed integer. If value is halfway between two whole numbers, the even number is returned; that is, 4.5 is converted to 4, and 5.5 is converted to 6.

This is really strange behavior, and it can lead to hard-to-find bugs.
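A minimal console sketch of the behavior described above, and of how Math.Round with MidpointRounding.AwayFromZero gives the "standard" rounding one might expect instead:

```csharp
using System;

class Program
{
    static void Main()
    {
        // Convert.ToInt32 uses banker's rounding (round half to even)
        Console.WriteLine(Convert.ToInt32(4.5)); // 4
        Console.WriteLine(Convert.ToInt32(5.5)); // 6

        // Math.Round lets you request the familiar away-from-zero midpoint rule
        Console.WriteLine(Math.Round(4.5, MidpointRounding.AwayFromZero)); // 5
        Console.WriteLine(Math.Round(5.5, MidpointRounding.AwayFromZero)); // 6
    }
}
```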

I looked at the code with Reflector to analyze it, but it is an extern method:

//Convert.ToInt32
[SecuritySafeCritical, __DynamicallyInvokable]
public static int ToInt32(decimal value)
{
    return decimal.FCallToInt32(value);
}

//decimal
[MethodImpl(MethodImplOptions.InternalCall), SecurityCritical]
internal static extern int FCallToInt32(decimal d);

I know that the Convert class is meant to convert one base data type to another and should not be used for rounding values, but you will agree that one would expect standard rounding.

My question is: why this behavior, and why is it not stamped in big red letters on MSDN?

giammin
  • It's bankers rounding, you need to use it in any statistical application, else number are skewed upwards with large number sets. – flindeberg Oct 30 '14 at 14:05
  • See the [documentation](http://msdn.microsoft.com/en-us/library/system.math.round(v=vs.110).aspx), under the heading "Rounding to nearest, or banker's rounding." That said, questions of "why" something in the framework behaves the way it does are generally considered not constructive. – Jim Mischel Oct 30 '14 at 14:06
  • ops I swear I searched in stackoverflow before asking but did not find anything. sorry for the duplicate! – giammin Oct 30 '14 at 14:09

1 Answer


It's called banker's rounding and it's specifically used in financial applications. Since decimal is designed for financial applications, it's pretty obvious why it's the default rounding method for decimal, isn't it? :)

The idea behind it is that if you always rounded one way, the .5 numbers would break the distribution of numbers, or in other words, your rounding errors would accumulate significantly. By rounding up or down based on evenness, you will usually end up rounding up 50% of the time in those cases, and down the rest - thus the error will tend to stay very small.
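The accumulation argument can be sketched numerically. This illustrative snippet (the values are mine, not from the answer) sums the midpoints 0.5, 1.5, …, 99.5 under each rule; always rounding half up drifts above the true sum, while round-half-to-even cancels out:

```csharp
using System;

class RoundingDrift
{
    static void Main()
    {
        decimal trueSum = 0, awayFromZero = 0, toEven = 0;

        // Sum the midpoint values 0.5, 1.5, ..., 99.5 under each rounding rule
        for (decimal d = 0.5m; d < 100m; d += 1m)
        {
            trueSum      += d;
            awayFromZero += Math.Round(d, MidpointRounding.AwayFromZero);
            toEven       += Math.Round(d, MidpointRounding.ToEven);
        }

        Console.WriteLine(trueSum);      // 5000.0 (exact sum)
        Console.WriteLine(awayFromZero); // 5050   (always rounding up drifts by +50)
        Console.WriteLine(toEven);       // 5000   (ups and downs cancel out)
    }
}
```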

This is something that was used long before computers :)

Luaan
  • ok but why it is replicated for doubles... – giammin Oct 30 '14 at 14:04
  • If it wasn't replicated for doubles, that would be inconsistent. – DavidG Oct 30 '14 at 14:06
  • @giammin It doesn't really matter. It's pretty much impossible to get `x.5` as a `double`, because binary floating points aren't decimal-precise, just binary-precise. So the only real `.5` number you can represent is `0.5`. In effect, `double` does this kind of thing "automatically", because you will always get numbers like `2.50000001`, which is *not* midpoint rounded in the first place. – Luaan Oct 30 '14 at 14:06
  • @Luaan thanks, today I learned a new things! :) – giammin Oct 30 '14 at 14:17
  • @Luaan I don't think I understand your comment. 1.5, 2.5, 3.5, ... , 4503599627370493.5, 4503599627370494.5, 4503599627370495.5 can each be represented exactly in IEEE 754 64-bit binary floating point. x.5 numbers show up in practice as e.g. the arithmetic mean of a pair of exactly representable integers one of which is even and the other odd. – Patricia Shanahan Oct 30 '14 at 15:41
  • @PatriciaShanahan Yes, I shouldn't have used `.5`, that's misleading. You're usually rounding to two decimal places in financial applications, where you get the inaccuracy issue - numbers like `0.15`, which require infinite binary representation (while e.g. `0.25` or `0.5` don't). Of course, once you start adding numbers up (quite common), you're quickly going to get numbers that are "almost, but not quite 2.5". Midpoint rounding doesn't apply there anymore. – Luaan Oct 30 '14 at 15:52
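The representability point raised in the comments can be checked directly. This illustrative snippet uses the "R" format specifier, which prints a double's shortest round-trippable form, to show that x.5 values (e.g. the mean of an even and an odd integer) are exact in binary, while a sum that should be 0.15 is not:

```csharp
using System;

class Representability
{
    static void Main()
    {
        // The mean of an even and an odd integer is an exact x.5 double
        Console.WriteLine((2.0 + 3.0) / 2 == 2.5); // True

        // 0.15 has an infinite binary expansion, so arithmetic exposes the error
        Console.WriteLine((0.1 + 0.05).ToString("R")); // 0.15000000000000002
        Console.WriteLine(0.1 + 0.05 == 0.15);         // False
    }
}
```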