172

Does anyone know why integer division in C# returns an integer and not a float? What is the idea behind it? (Is it only a legacy of C/C++?)

In C#:

float x = 13 / 4;   
//== operator is overridden here to use epsilon compare
if (x == 3.0)
   Console.WriteLine("Hello world");

The result of this code would be:

'Hello world'

Strictly speaking, there is no such thing as integer division (division by definition is an operation which produces a rational number, and the integers are only a small subset of the rationals.)

BanditoBunny
  • 3,658
  • 5
  • 32
  • 40
  • 64
    because it is `integer` division not `floating point` division. – Hunter McMillen Jun 01 '12 at 13:33
  • it does have to (in VB.Net) it is implemented differently in a natural mathematical way where all the result of the division operation is an irrational number. – BanditoBunny Jun 01 '12 at 13:38
  • 4
    I think you mean *rational numbers*. See [wikipedia](http://en.wikipedia.org/wiki/Division_(mathematics)): Dividing two integers may result in a remainder. To complete the division of the remainder, the number system is extended to include fractions or rational numbers as they are more generally called. – crashmstr Jun 01 '12 at 13:52
  • 6
    This is the reason I'm not a fan of "copying syntax" in languages. I come from VB thinking "C# is .NET" not "C# is like C". My mistake I guess, but in this case I prefer the VB way. If they went through the trouble of generating a compiler error when using uninitialized simple types (you don't even get a warning in C) then why not warn you when you're assigning integer division to a float? – darda Apr 10 '13 at 19:37
  • 1
    Other languages have different operators for [*Real*](https://en.wikipedia.org/wiki/Real_number) vs [*Integer* division.](https://mathworld.wolfram.com/IntegerDivision.html) `13 / 4 = 3.25` verses `13 div 4 = 3`. – Ian Boyd Nov 04 '21 at 14:03

8 Answers

128

While it is common for new programmers to make the mistake of performing integer division when they actually meant to use floating-point division, in actual practice integer division is a very common operation. If you are assuming that people rarely use it, and that every time you do division you'll always need to remember to cast to a floating-point type, you are mistaken.

First off, integer division is quite a bit faster, so if you only need a whole-number result, you'd want to use the more efficient operation.

Secondly, there are a number of algorithms that use integer division, and if the result of division were always a floating-point number you would be forced to round the result every time. One example off the top of my head is changing the base of a number. Calculating each digit involves the integer division of a number along with the remainder, rather than the floating-point division of the number.
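
As a sketch of that base-conversion idea (the method name `ToBase` is mine, not from the answer; it assumes a non-negative value and a radix between 2 and 16):

```csharp
using System;
using System.Text;

static string ToBase(int value, int radix)
{
    // Repeated integer division peels off one digit per step:
    // the remainder is the digit, the quotient feeds the next step.
    if (value == 0) return "0";
    var digits = new StringBuilder();
    while (value > 0)
    {
        digits.Insert(0, "0123456789ABCDEF"[value % radix]);
        value /= radix;  // integer division: no rounding needed
    }
    return digits.ToString();
}

Console.WriteLine(ToBase(13, 2));   // 1101
Console.WriteLine(ToBase(255, 16)); // FF
```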

For these (and other related) reasons, integer division results in an integer. If you want the floating-point division of two integers, you'll just need to remember to cast one to a double/float/decimal.

Servy
  • 202,030
  • 26
  • 332
  • 449
  • 6
In VB.Net the .Net architects made another decision: / is always floating-point division and \ is integer division, so it is kind of inconsistent, unless you consider the C++ legacy; – BanditoBunny Jun 01 '12 at 13:46
Concerning casting: the problem is you don't always know (keep in mind) what the result of your operation is - it could be a complex formula which has multiple variables, and one of them being double is enough for this thing to work properly (need to check Resharper rules, maybe there is one). – BanditoBunny Jun 01 '12 at 13:52
  • 4
You can determine, at compile time, whether the `/` operator will be performing integer or floating point division (unless you're using dynamic). If it's hard for *you* to figure it out because you're doing so much on that one line, then I'd suggest breaking that line up into several lines so that it's easier to figure out whether the operands are integers or floating point types. Future readers of your code will likely appreciate it. – Servy Jun 01 '12 at 13:55
  • 7
    I personally find it problematic that I always have to think what are variables I am dividing, I count it as a wasteful use of my attention. – BanditoBunny Jun 01 '12 at 14:00
  • 1
    I agree with @BanditoBunny. Have two operators and leave it up to the programmer how fast he wants to go. Unless C# runs on MCU's I can't hardly imagine having to optimize anything to that extent. – darda Apr 10 '13 at 19:42
  • 8
    @pelesl As it would be a *huge* breaking change to do that for which an astronomical number of programs would be broken I can say with complete confidence that it will never happen in C#. That's the kind of thing that needs to be done from day 1 in a language or not at all. – Servy Apr 10 '13 at 19:46
  • 2
    @Servy: There are a lot of things like that in C, C++, and C#. Personally, I think C# would be a better language if there had been a different operator for integer division and, to avoid having legitimate code yield astonishing behavior, the `int/int` operator were simply illegal [with a diagnostic specifying that code must cast an operand or use the other operator, depending upon which behavior was desired]. Were some other good token sequence available for integer division, it might be possible to deprecate the use of `/` for that purpose, but I don't know what would be practical. – supercat Dec 20 '13 at 21:41
@Servy: The unfortunate `/` operator in C and Fortran isn't as bad as the way C notates octal numbers, though. I really wish there were some other way to notate octal and decimal numbers [e.g. being able to represent twenty-three as 0q27 or 0t00035]; the former so compilers could warn about any numerical literals with leading zeroes; the latter to allow values to be formed via token pasting. – supercat Dec 20 '13 at 21:45
  • 1
    "integer division is quite a bit faster" - actually, floating point division is faster on modern processors (it probably wasn't when C# was created though) – Njol Mar 23 '16 at 11:33
  • 1
    I'd like to point out that you can hover your mouse over operators in at least Visual Studio 2015 and see what version of the operator is being recognized. – jxramos May 09 '17 at 17:53
  • 1
    @Servy "That's the kind of thing that needs to be done from day 1 in a language or not at all." [Python proves you wrong.](https://www.python.org/dev/peps/pep-0238/) Admittedly, it's a harder change in .NET due to static typing, but it's still a dumb hold over that isn't nearly as common as your answer makes it out to be. Having it readily available? Sure. Makes complete sense. Having it as the default? Bonkers for over 20 years. – jpmc26 Aug 22 '18 at 18:53
  • @jpmc26 Apparently Python is more open to breaking changes than C# is. They've made it very clear that new language versions not having breaking changes if at all possible is a *very* high priority. Like I said, I'm *very* confident this would never happen in C#, due to the priorities the language team has made clear are most important for the language. That said, do you have a reference showing that the behavior of the operator *changed* in Python, because I find that hard to believe. – Servy Aug 22 '18 at 18:55
  • @Servy Yes, I edited in a link to the official PEP document in my above comment. Python is more open to breaking changes because they've learned pretty good ways of managing them. (Although admittedly, the `str` changes from 2 to 3 weren't handled as well as they should have been.) It was literally an opt in change for about 7 years before they released a version where it became mandatory. See the [`__future__` module](https://docs.python.org/2/library/__future__.html). Python has a lot to teach other platforms. – jpmc26 Aug 22 '18 at 19:00
  • 3
    MSBasic always had \ and /, long before .NET came around. And please let's not say that we want as many breaking changes as Python. Nearly every Python project requires a specific version to work. It's a nightmare. – PRMan Apr 30 '19 at 17:39
89

See C# specification. There are three types of division operators

  • Integer division
  • Floating-point division
  • Decimal division

In your case we have integer division, with the following rules applied:

The division rounds the result towards zero, and the absolute value of the result is the largest possible integer that is less than the absolute value of the quotient of the two operands. The result is zero or positive when the two operands have the same sign and zero or negative when the two operands have opposite signs.

I think the reason why C# uses this type of division for integers (some languages return a floating-point result) is hardware - integer division is faster and simpler.
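
A quick illustration of the rounds-towards-zero rule (truncation, not flooring):

```csharp
using System;

Console.WriteLine( 13 /  4);  //  3
Console.WriteLine(-13 /  4);  // -3  (truncated towards zero, not floored to -4)
Console.WriteLine( 13 / -4);  // -3
Console.WriteLine(-13 %  4);  // -1  (the remainder keeps the dividend's sign)
```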

Sergey Berezovskiy
  • 232,247
  • 41
  • 429
  • 459
46

Each data type is capable of overloading each operator. If both the numerator and the denominator are integers, the integer type will perform the division operation and it will return an integer type. If you want floating-point division, you must cast one or more of the numbers to a floating-point type before dividing them. For instance:

int x = 13;
int y = 4;
float z = (float)x / (float)y;

or, if you are using literals:

float x = 13f / 4f;

Keep in mind, floating points are not precise. If you care about precision, use something like the decimal type, instead.
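
For example, a quick sketch of the difference (3.25 happens to be exactly representable in binary, but 0.1 is not):

```csharp
using System;

float f = 13f / 4f;     // 3.25 = 13/4 is an exact binary fraction, so this is exact
Console.WriteLine(f);   // 3.25

float tenth = 1f / 10f; // 0.1 has no exact binary representation
Console.WriteLine(tenth == 0.1);        // False: float 0.1 != double 0.1
Console.WriteLine(0.1m + 0.2m == 0.3m); // True: decimal is exact for base-10 fractions
```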

Steven Doggart
  • 43,358
  • 8
  • 68
  • 105
  • 2
    +1 for mentioning that only one term needs to be float to make floating point division. – Xynariz Apr 03 '14 at 22:48
Obviously your statement about precision is right in the context of learning and in making it not too complicated to understand. As we need to be as precise as possible in our jobs I still want to clarify on the precision: According to IEEE 754-1985 you CAN get an exact result (even though this is mostly not the case). You can get a precise result when the calculation values have been represented exactly before and the result is - simply speaking - a sum of powers of 2. Even though it can't be best practice to rely on that precision in those special cases. – L. Monty Jul 07 '18 at 12:11
Addition on precision: The probability to get an exact result drastically improves as the result is close to 1 or -1. It might be a little confusing that this probability still remains 0 as there are infinite numbers and a finite number of results, which can be represented exactly. :) – L. Monty Jul 07 '18 at 12:19
  • 1
    @L.Monty thanks for bringing that up. I’ve learned more about floating points since writing this answer and the point you are making is fair. Technically, I would still say that my statement “floating points are not precise” is acceptable, in the sense that just because something can be accurate sometimes doesn’t mean it, as a whole, is precise. As they say, a broken clock is right twice a day, but I’d never call one a precision instrument. I’m actually rather surprised that’s the part that bothered you more than my suggestion that the decimal type _is_ precise. – Steven Doggart Jul 07 '18 at 12:59
  • Decimals are imprecise, for all the same reasons that floats are; it’s just that floats are base-2 and decimals are base-10. For instance, a decimal type can’t accurately hold the precise value of 1/3. – Steven Doggart Jul 07 '18 at 13:01
14

Might be useful:

double a = 5.0/2.0;   
Console.WriteLine (a);      // 2.5

double b = 5/2;   
Console.WriteLine (b);      // 2

int c = 5/2;   
Console.WriteLine (c);      // 2

double d = 5f/2f;   
Console.WriteLine (d);      // 2.5
erenozten
  • 180
  • 1
  • 6
13

Since you don't use any suffix, the literals 13 and 4 are interpreted as integers:

Manual:

If the literal has no suffix, it has the first of these types in which its value can be represented: int, uint, long, ulong.

Thus, since 13 is an integer, integer division will be performed:

Manual:

For an operation of the form x / y, binary operator overload resolution is applied to select a specific operator implementation. The operands are converted to the parameter types of the selected operator, and the type of the result is the return type of the operator.

The predefined division operators are listed below. The operators all compute the quotient of x and y.

Integer division:

int operator /(int x, int y);
uint operator /(uint x, uint y);
long operator /(long x, long y);
ulong operator /(ulong x, ulong y);

And so rounding towards zero (truncation) occurs:

The division rounds the result towards zero, and the absolute value of the result is the largest possible integer that is less than the absolute value of the quotient of the two operands. The result is zero or positive when the two operands have the same sign and zero or negative when the two operands have opposite signs.

If you do the following:

int x = 13f / 4f;

You'll receive a compiler error, since the floating-point division (the / operator applied to 13f and 4f) results in a float, which cannot be cast to int implicitly.

If you want the division to be a floating-point division, you'll have to make the result a float:

float x = 13 / 4;

Notice that you'll still be dividing integers, and the result will implicitly be cast to float: the result will be 3.0. To explicitly declare the operands as float, use the f suffix (13f, 4f).

CodeCaster
  • 147,647
  • 23
  • 218
  • 272
  • 2
    +1 for explaining that you can have the answer as a float but still do integer division. Additionally, another common way I've seen to force floating point division is to multiply the first term of the division by `1.0`. – Xynariz Apr 03 '14 at 22:46
9

It's just a basic operation.

Remember when you learned to divide. In the beginning we solved 9/6 = 1 with remainder 3.

9 / 6 == 1  //true
9 % 6 == 3 // true

The / operator, in combination with the % operator, is used to retrieve those values.
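
The quotient and remainder can also be obtained in a single call via `Math.DivRem`:

```csharp
using System;

// Math.DivRem returns the quotient and places the remainder in an out parameter.
int quotient = Math.DivRem(9, 6, out int remainder);
Console.WriteLine(quotient);   // 1
Console.WriteLine(remainder);  // 3
```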

L. Monty
  • 872
  • 9
  • 17
8

The result will always be of the type that has the greater range of the numerator and the denominator. The exceptions are byte and short, which produce int (Int32).

var a = (byte)5 / (byte)2;  // 2 (Int32)
var b = (short)5 / (byte)2; // 2 (Int32)
var c = 5 / 2;              // 2 (Int32)
var d = 5 / 2U;             // 2 (UInt32)
var e = 5L / 2U;            // 2 (Int64)
var f = 5UL / 2UL;          // 2 (UInt64)
var g = 5F / 2UL;           // 2.5 (Single/float)
var h = 5F / 2D;            // 2.5 (Double)
var i = 5.0 / 2F;           // 2.5 (Double)
var j = 5M / 2;             // 2.5 (Decimal)
var k = 5M / 2F;            // Not allowed

There is no implicit conversion between floating-point types and the decimal type, so division between them is not allowed. You have to explicitly cast and decide which one you want (Decimal has more precision and a smaller range compared to floating-point types).
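
To make that last case compile, you pick the result type yourself with an explicit cast, e.g.:

```csharp
using System;

var k1 = 5M / (decimal)2F;       // 2.5 (Decimal): float cast to decimal
var k2 = (float)5M / 2F;         // 2.5 (Single): decimal cast to float
Console.WriteLine(k1.GetType()); // System.Decimal
Console.WriteLine(k2.GetType()); // System.Single
```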

z m
  • 1,493
  • 2
  • 19
  • 21
1

As a little trick to know what type you are getting, you can use var, and the compiler will tell you the type to expect:

int a = 1;
int b = 2;
var result = a/b;

The compiler will tell you that result is of type int here.

Jorge
  • 76
  • 3