
I have a weird kind of error.

I am trying to implement the basic Euclidean algorithm using the BigInteger class, as shown. When I run it, it throws a StackOverflowError, but if I debug it, it runs through correctly and gives me the correct answer.

I seriously don't understand the difference between a debug run and a normal run.

      static BigInteger gcd(BigInteger a, BigInteger b) {
        if (a.equals(BigInteger.ZERO)) {
            return b;
        } else if (b.equals(BigInteger.ZERO)) {
            return a;
        }
        BigInteger max = a.max(b);
        BigInteger min = a.min(b);
        return gcd(max.subtract(min), min);
      }
maress

2 Answers


The algorithm would require far fewer calls if you replaced the repeated subtraction with a mod operation (a sketch follows below). Check what happens when max is very large and min is very small.
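A minimal sketch of that change, keeping the same method shape as in the question (the Gcd class wrapper and main are made up here just so the snippet compiles and runs on its own):

    import java.math.BigInteger;

    public class Gcd {
        // Same structure as the method in the question, but the recursive
        // step uses mod instead of repeated subtraction, so the recursion
        // depth stays small even when one argument is much larger.
        static BigInteger gcd(BigInteger a, BigInteger b) {
            if (a.equals(BigInteger.ZERO)) {
                return b;
            } else if (b.equals(BigInteger.ZERO)) {
                return a;
            }
            BigInteger max = a.max(b);
            BigInteger min = a.min(b);
            return gcd(max.mod(min), min);   // mod, not subtract
        }

        public static void main(String[] args) {
            // Worst case for the subtraction version; harmless here.
            System.out.println(gcd(BigInteger.valueOf(1000000), BigInteger.ONE)); // prints 1
        }
    }

Note that BigInteger already ships a built-in gcd method, so in practice you would not need to hand-roll this at all.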

fgb
  • Using the mod operation works without throwing the error. Why does it throw the error on subtraction? Even for the built-in type long it throws the error. – maress May 18 '12 at 11:26
  • If the input is a long, then the worst-case would be subtracting 1 from Long.MAX_VALUE each time, requiring Long.MAX_VALUE stack frames in total. This would eventually overflow the stack for any reasonable size. – fgb May 18 '12 at 11:41
  • So I guess recursion is not the best thing to use here? During debugging I noticed that a substantial difference between the values does result in a stack overflow, even if the max value is substantially less than Long.MAX_VALUE. The reason for the stack overflow makes sense, but the other fundamental question remains: why is there no stack overflow when debugging? The debug run goes through to completion. – maress May 18 '12 at 11:52

This is not the Euclidean algorithm for GCD. The correct GCD algorithm can easily be stated as a recursive procedure because the number of iterations is small in practice. But this incorrectly implemented algorithm can run for millions of iterations, e.g. if 'a' is 1,000,000 and 'b' is 1, it makes one million recursive calls.

(The call is a tail call, so a sufficiently clever Java implementation could in principle optimize away the stack frame, but standard JVMs do not perform tail-call optimization, and it clearly does not happen here. For some unknown reason the debugger run might behave as if it did, though that sounds odd.)

In the actual algorithm, the recursive step is (a % b, b) instead of (a - b, b); the difference in the number of steps is illustrated below. Google for "Euclid GCD algorithm".
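To make the one-million-call figure concrete, here is a small step counter (the GcdStepCount class and countSteps method are made-up names, for illustration only); it replays both update rules in a plain loop so that the counting itself cannot overflow the stack:

    import java.math.BigInteger;

    public class GcdStepCount {
        // Counts how many recursive calls the given update rule would make,
        // using iteration instead of recursion.
        static long countSteps(BigInteger a, BigInteger b, boolean useMod) {
            long steps = 0;
            while (!a.equals(BigInteger.ZERO) && !b.equals(BigInteger.ZERO)) {
                BigInteger max = a.max(b);
                BigInteger min = a.min(b);
                a = useMod ? max.mod(min) : max.subtract(min);
                b = min;
                steps++;
            }
            return steps;
        }

        public static void main(String[] args) {
            BigInteger a = BigInteger.valueOf(1000000);
            BigInteger b = BigInteger.ONE;
            System.out.println("subtract steps: " + countSteps(a, b, false)); // 1000000
            System.out.println("mod steps:      " + countSteps(a, b, true)); // 1
        }
    }

A million nested gcd calls is far deeper than the default JVM stack allows, which is why the subtraction version throws StackOverflowError while the mod version finishes in a handful of frames.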

Antti Huima