How do I divide two integers to get a double?
- Assuming this was asked in an interview: integer division always results in an integer. You must use a type cast like the ones shown below. – Sesh Mar 19 '09 at 04:23
- Different types of divisions (integer, floating-point, decimal) are discussed in [Why integer division in c# returns an integer but not a float?](//stackoverflow.com/q/10851273) – Michael Freidgeim Mar 27 '17 at 01:02
9 Answers
You want to cast the numbers:
double num3 = (double)num1/(double)num2;
Note: If any of the arguments in C# is a `double`, a double divide is used, which results in a `double`. So, the following would work too:
double num3 = (double)num1/num2;
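For example, as a minimal self-contained sketch (num1 and num2 here are just placeholder variables):

int num1 = 7;
int num2 = 12;
double num3 = (double)num1 / num2; // 0.58333..., because num1 is promoted to double before the division
Console.WriteLine(num3);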
- Don't know if this is the same in C#, but C only requires you to cast the first - it'll automatically make double/int a double. – paxdiablo Mar 19 '09 at 04:34
- @Pax, If any of the args in C or C# are a double, a double divide is used (resulting in a double). – strager Mar 19 '09 at 05:18
- Be careful not to do this: `double num3 = (double)(num1/num2);`. This will just give you a double representation of the result of the integer division! – The Lonely Coder Oct 09 '14 at 13:57
- Supposing you don't need the extra precision, is there a reason to cast to `double` instead of `float`? I can see the question calls for `double` but I'm curious anyway. – Kyle Delaney May 17 '17 at 01:31
- @KyleDelaney Just because in C# we normally use `double` and not `float`. When you write a variable like `var a = 1.0;`, this 1.0 is always a `double`. I guess this is the main reason. – this.myself Feb 27 '18 at 13:37
Complementing @NoahD's answer.
For greater precision you can cast to decimal:
(decimal)100/863
//0.1158748551564310544611819235
Or:
Decimal.Divide(100, 863)
//0.1158748551564310544611819235
Doubles are represented using 64 bits, while decimals use 128:
(double)100/863
//0.11587485515643106
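As a small runnable sketch of the same comparison (the expected output values are the ones quoted above; the exact double formatting can vary slightly between runtimes):

Console.WriteLine((double)100 / 863);        // 0.11587485515643106
Console.WriteLine(decimal.Divide(100, 863)); // 0.1158748551564310544611819235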
In-depth explanation of "precision"
For more details about the floating-point representation in binary and its precision, take a look at this article from Jon Skeet where he talks about floats and doubles, and this one where he talks about decimals.

- Wrong! `double` has a precision of 53 bits, and it's a **binary** floating-point format, whereas `decimal` is a... decimal one, of course, with [96 bits of precision](https://stackoverflow.com/q/3801440/995714). So `double` is precise to ~15-17 decimal digits and decimal to 28-29 digits (and not twice the precision of `double`). More importantly, `decimal` actually uses only 102 of the 128 bits. – phuclv Jul 08 '18 at 06:32
- Thanks @phuclv, fixed that. I meant "space allocation". You were right about the precision of `decimals` (96), but `doubles` have [52 bits of mantissa](http://csharpindepth.com/Articles/General/FloatingPoint.aspx), not 53. – fabriciorissetto Sep 21 '18 at 21:50
- Yes, the mantissa has 52 bits, but there's still a hidden bit, resulting in a 53-bit significand. [Is it 52 or 53 bits of floating point precision?](https://stackoverflow.com/q/18409496/995714) – phuclv Sep 22 '18 at 01:09
Cast the integers to doubles.

- To be specific, you can cast an integer to a double like so: `(double)myIntegerValue` – Whiplash Nov 29 '16 at 16:18
Convert one of them to a double first. This form works in many languages:
real_result = (int_numerator + 0.0) / int_denominator
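In C#, for instance, a minimal sketch of this trick might look like (the names are just placeholders):

int intNumerator = 7;
int intDenominator = 12;
double realResult = (intNumerator + 0.0) / intDenominator; // the + 0.0 promotes the numerator to double
Console.WriteLine(realResult); // 0.58333...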

- @Basic there's 100 ways to do it. I prefer addition just because it's faster, although casting is obviously faster still. – Mark Ransom Mar 16 '17 at 23:23
int firstNumber = 5000, secondNumber = 37;
var decimalResult = decimal.Divide(firstNumber, secondNumber);
Console.WriteLine(decimalResult);

- You can [edit] your post to add information. Please don't add it in comments. – Suraj Rao Dec 17 '20 at 13:53
I have gone through most of the answers, and I'm pretty sure that dividing two ints directly will never give a double or float: the result of integer division is always an integer. But there are plenty of ways to make the calculation work; just cast the operands to float or double before the division and it will be fine.
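For example, a minimal sketch (the variable names are placeholders):

int a = 7, b = 12;
double d = (double)a / b; // cast one operand before dividing
float f = (float)a / b;   // the same idea with float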

- Hello and welcome to SO! Please read the [tour](https://stackoverflow.com/tour), and [How do I write a good answer?](https://stackoverflow.com/help/how-to-answer) Please make sure to include new information (that is not in the other answers) in your answer. – Tomer Shetah Dec 25 '20 at 08:37
The easiest way to do that is to write the numbers with decimal places, so they become doubles.
Ex.:
var v1 = 1 / 30;       // the result is 0 (integer division)
var v2 = 1.00 / 30.00; // the result is 0.033333333333333333

In the comments to the accepted answer a distinction is made that seems worth highlighting in a separate answer.
The correct code:
double num3 = (double)num1/(double)num2;
is not the same as casting the result of integer division:
double num3 = (double)(num1/num2);
Given num1 = 7 and num2 = 12:
The correct code will result in num3 = 0.5833333
Casting the result of integer division will result in num3 = 0.00
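A short sketch that makes the difference visible with those values (assuming it runs inside a console program):

int num1 = 7, num2 = 12;
Console.WriteLine((double)num1 / (double)num2); // prints roughly 0.5833333 (real division)
Console.WriteLine((double)(num1 / num2));       // prints 0 (the integer division happens first)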
