
This is the code copied from this similar question's solution. Why does it fail after, say, comb(100, 50)? I believe it shouldn't exceed the limits of long long, and if it does, shouldn't I expect a runtime error instead of a wrong answer?

    long long comb(long long n, long long r) {
        long long limit = r;
        if (r > (n - r))
            limit = n - r;
        long long sum = 1;
        long long i = 0;
        while (i < limit) {
            sum *= n - i;
            sum /= i + 1;
            i++;
        }
        return sum % 1000000007;
    }

The correct answer I expect for 100C50 % 1000000007 is 538992043 (calculated using pascal triangle method) and not 354365244.
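
For reference, a minimal sketch of the Pascal-triangle approach I used for the expected value (it takes the modulus at every addition, so each entry stays below 10^9+7 and long long never overflows):

    #include <stdio.h>

    #define MOD 1000000007LL

    /* Pascal's triangle modulo MOD: every entry stays below MOD,
       so the additions never overflow a long long. */
    long long comb_mod(int n, int r) {
        static long long C[101][101];          /* enough for n <= 100 */
        for (int i = 0; i <= n; i++) {
            C[i][0] = 1;
            for (int j = 1; j <= i; j++)
                C[i][j] = (C[i-1][j-1] + C[i-1][j]) % MOD;
        }
        return C[n][r];
    }

    int main(void) {
        printf("%lld\n", comb_mod(100, 50));   /* expected: 538992043 */
        return 0;
    }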

– kBisla
  • so what's your question exactly ? – Farouq Jouti Aug 15 '13 at 23:45
  • Change `sum *= n-i` to this: `printf("%lld\n", (sum *= n-i));` and see what gets printed on your console. – WhozCraig Aug 15 '13 at 23:52
  • Why would you expect a runtime error from integer overflow? Have you ever been told, or have any experience, that this results in a runtime error? The fact is that there's no hardware trap for integer overflow, so programs would run considerably more slowly if the compiler generated code to check for it. As for what the code actually does, it's trivial to add printf statements to track it and learn its detailed behavior. – Jim Balter Aug 16 '13 at 00:25

2 Answers


I think you should read the answer you linked again and then try to work through some of its code. Most likely the problem here is, as it was in that answer, integer overflow.

When it reads a long long declaration, C is only required to allocate the space the standard says it must; in other words, depending on the platform, long long may occupy a varying number of bytes, but never fewer than the minimum the C standard sets (at least 64 bits).
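
If you want to see what your own platform actually gives you, a quick check (a minimal sketch using <limits.h>) is:

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        /* The standard only guarantees long long is at least 64 bits;
           the exact size and limits are implementation-defined. */
        printf("sizeof(long long) = %zu bytes\n", sizeof(long long));
        printf("LLONG_MAX  = %lld\n", LLONG_MAX);
        printf("ULLONG_MAX = %llu\n", ULLONG_MAX);
        return 0;
    }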

This post should enlighten you about this particular aspect of the language.

On another note, and assuming you have moved to C++, you could use a dedicated arbitrary-precision (big-number) class to store the number's digits, which removes the long long size problem altogether.

About the runtime error: C isn't required to warn you even when you write data that doesn't fit. Suppose you have an unsigned int (which by definition can store at least the numbers between 0 and 65535 (see this)). If you try to store a number higher than that, say 70000, C will happily do it; however, the variable will end up holding a seemingly absurd number. Representing 70000 would take more bits than the type is guaranteed to hold, so the extra bits are simply discarded and the stored value wraps around. You only get a runtime error if you try to access or write memory you don't own ("protected" memory).
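
As a concrete illustration, here is a small sketch using unsigned short (16 bits on typical platforms, so the minimum 0..65535 range and the wrap-around are easy to see):

    #include <stdio.h>

    int main(void) {
        /* unsigned short is 16 bits on typical platforms: range 0..65535 */
        unsigned short x = 70000;   /* doesn't fit: extra bits are discarded */
        printf("%u\n", (unsigned)x); /* prints 4464, i.e. 70000 % 65536 */
        return 0;
    }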

You can also try unsigned long long int, which (for a compiler following the C99 standard) is guaranteed to hold at least the numbers from 0 to 18446744073709551615.

– Doktoro Reichard

I might very well be wrong, but to me it seems like you're computing 100!/(50!^2), which is about 1×10^29, while an unsigned long long int can only hold up to 18,446,744,073,709,551,615. So it would be a case of overflow.
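
A quick way to confirm it, along the lines of the printf suggestion in the comments above, is to trace the running product inside the original loop and watch the value turn into garbage (a sketch; strictly speaking signed overflow is undefined behaviour, so the exact output is compiler-dependent):

    #include <stdio.h>

    int main(void) {
        long long n = 100, r = 50, sum = 1;
        long long limit = (r < n - r) ? r : n - r;
        for (long long i = 0; i < limit; i++) {
            sum *= n - i;   /* the intermediate product exceeds LLONG_MAX well before i reaches 49 */
            sum /= i + 1;
            printf("i=%lld  sum=%lld\n", i, sum);
        }
        return 0;
    }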