When I run your program in a debugger with `n = 40` and `r = 20` on a 32-bit binary compiled with Microsoft Visual Studio, I don't get a segmentation fault, but I do get a division-by-zero error in the following line:

    return factorial(l)/(factorial(l-m)*factorial(m));
`factorial(l-m)` and `factorial(m)` both evaluate to `factorial(20)`. The exact value of 20! is 2,432,902,008,176,640,000. Assuming that `sizeof(int) == 4` (32 bits), this number cannot be represented by a signed `int`, so the multiplications inside `factorial` overflow, which, according to the C standard, is undefined behavior.

However, even though the behavior is undefined, we can reasonably speculate what happens in practice: each overflow wraps around, so only the low 32 bits of the result survive. For 20! those low bits are 0x82B40000, which is 2,192,834,560 read as an unsigned value; interpreted as a signed two's-complement `int`, the same bit pattern represents -2,102,132,736, and that is the value `factorial(20)` returns.
Since this value is multiplied by itself in your code (with `n = 40` and `r = 20`, both factors are `factorial(20)`), the mathematically exact result of the multiplication is (-2,102,132,736)² = 4,418,962,039,762,845,696. This number certainly does not fit into a signed `int` either, so the multiplication overflows again. The hexadecimal representation of this number is 0x3D534E9000000000.
Since this large number does not fit into 32 bits, the excess high bits are discarded, which is equivalent to taking the result modulo `UINT_MAX + 1` (4,294,967,296). As the hexadecimal representation above shows, the low 32 bits of the product are all zero, so the result of this modulo operation is 0.
Therefore, the expression `factorial(l-m)*factorial(m)` evaluates to 0, which means that the line

    return factorial(l)/(factorial(l-m)*factorial(m));

causes a division-by-zero exception.
One way of handling such large numbers is to use floating-point numbers instead of integers. These can represent very large values without overflowing, but at the cost of precision. If you use `double` instead of `float`, you get a 53-bit instead of a 24-bit significand, so precision is lost later and, when it is lost, the error is smaller.