
I need to perform a computation of the form a * 2^m / b, where a/b is near 1, a and b are near 2^m, and m is large (greater than 1000). I need both the quotient and the remainder. I can do this in Java with

BigInteger[] computeScaledRatio(BigInteger a, BigInteger b, int m) {
    return a.shiftLeft(m).divideAndRemainder(b);
}

The cost is approximately the cost of dividing a 2m-bit number by an m-bit number.

Is there a way to make this faster?

If possible, I'd like to reduce the cost to approximately the cost of dividing two m-bit numbers.

I don't care if the resulting code is more complex and/or if some external libraries are needed.

I tried the following code out of desperation. The performance, not surprisingly, is dismal.

static final double LOG2_10 = Math.log(10) / Math.log(2);
static final BigDecimal TWO = BigDecimal.valueOf(2);
BigInteger[] computeScaledRatio(BigInteger a, BigInteger b, int m) {
    // enough decimal digits to represent roughly 2m bits
    int precision = (int) Math.ceil((2 * m) / LOG2_10);
    BigDecimal t = new BigDecimal(a).divide(new BigDecimal(b),
            new MathContext(precision));
    t = t.multiply(TWO.pow(m));

    BigInteger q = t.toBigInteger();
    // remainder: the fractional part of t, scaled back up by b
    BigInteger r = t.subtract(t.setScale(0, RoundingMode.FLOOR))
            .multiply(new BigDecimal(b)).setScale(0, RoundingMode.HALF_UP)
            .toBigInteger();
    return new BigInteger[] { q, r };
}
  • There certainly are more efficient division algorithms for large integers. Have you tried just using GMP? I hear there are Java bindings. I presume you need full precision for whatever reason? – doynax Dec 17 '13 at 09:08
  • No, I mean there are more efficient division algorithms in the big-O sense (though access to a machine's native high-precision division and multiplication primitives, e.g. 128/64 division these days, is usually difficult in high-level languages). In any event the GNU multi-precision library contains well-tested and optimized functions aimed at very long integer operations, so I'd say that's your first port of call. – doynax Dec 17 '13 at 09:32
  • No, you're asking whether there is a way to make this faster. Possibly through external libraries. I'm suggesting that GMP may or may not be better optimized but in any case it is easy to test and well worth checking out as a first step. – doynax Dec 17 '13 at 09:45
  • I believe that would be mpz_invert. Beware that multiplying by the modular inverse, if it exists, is not the same as division (unless the numerator is evenly divisible). Can you give us some more context on what you are attempting to achieve? Are you reusing the denominator across multiple operations? – doynax Dec 17 '13 at 09:52
  • This is decidedly not my field but I believe you would use Newton's method to compute the reciprocal or modular inverse and a fast integer multiplication algorithm (such as Karatsuba) for the intermediate multiplications. It will be difficult to match the performance of a well-tuned library aimed at this type of problem in the general case without exploiting domain knowledge (reducing the numerator by half will only buy you a constant factor), so I guess the question becomes how you will use the results and whether full precision is truly required? – doynax Dec 17 '13 at 10:26 (see the reciprocal sketch after these comments)
  • Could you post some sample numbers and the kind of performance you are getting? There may be a way to adapt Knuth's algorithm mentioned [here](http://stackoverflow.com/a/4771946/823393) because in your case `m` of the lower bits of the numerator will be zero and could therefore be ignored. – OldCurmudgeon Dec 17 '13 at 11:19
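
Here is a rough sketch of the precomputed-reciprocal idea from the comments (a Barrett-style reduction). The class name and structure are just for illustration; it assumes a, b > 0 with a not much larger than 2^m, and it only pays off when the same b and m are reused across calls, since the precomputation is itself a 2m-bit division:

import java.math.BigInteger;

final class ScaledRatio {
    private final BigInteger b;
    private final int m;
    private final BigInteger inv;   // floor(2^(2m) / b), computed once per (b, m)

    ScaledRatio(BigInteger b, int m) {
        this.b = b;
        this.m = m;
        this.inv = BigInteger.ONE.shiftLeft(2 * m).divide(b);
    }

    // Returns { floor(a * 2^m / b), a * 2^m mod b } using a multiplication,
    // a shift and a short correction loop instead of a full-size division.
    BigInteger[] computeScaledRatio(BigInteger a) {
        BigInteger q = a.multiply(inv).shiftRight(m);   // never exceeds the true quotient
        BigInteger r = a.shiftLeft(m).subtract(q.multiply(b));
        while (r.compareTo(b) >= 0) {                   // runs at most a few times
            q = q.add(BigInteger.ONE);                  // when a is O(2^m)
            r = r.subtract(b);
        }
        return new BigInteger[] { q, r };
    }
}

Each call still does two multiplications of roughly 2m bits, but in practice those tend to be cheaper than a 2m-by-m-bit division, and Newton's method could replace the divide() in the constructor if even the one-off precomputation is too expensive.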

2 Answers


For a start, you can do the following obvious optimization:

while (!b.testBit(0)) {   // while b is even
    b = b.shiftRight(1);  // b /= 2
    m -= 1;
}

before multiplication/division.
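
A sketch of one way this could fold into computeScaledRatio (assuming b > 0 and that m is at least the number of trailing zero bits of b); note that the remainder has to be shifted back up, because the division is now by the reduced b:

BigInteger[] computeScaledRatio(BigInteger a, BigInteger b, int m) {
    int k = b.getLowestSetBit();          // number of factors of 2 in b
    BigInteger bOdd = b.shiftRight(k);    // b with the power of two stripped off
    BigInteger[] qr = a.shiftLeft(m - k).divideAndRemainder(bOdd);
    qr[1] = qr[1].shiftLeft(k);           // remainder with respect to the original b
    return qr;
}

getLowestSetBit() does the whole while-loop in one step; the quotient is unchanged, only the remainder needs rescaling.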

Ingo

A bit-twiddling solution could grow out of this observation.

If a is na bits wide and b is nb bits wide, then you could begin by dividing a * 2^nb by b. After that, the trailing bits are just a repeating pattern. Think about it from your old long-division days: once you have shifted your divisor far enough to the right, you are just bringing down zeros from the dividend.

E.g. a = 53 & b = 42 (both 6 bits wide)

a * 2^6 / b = 110101000000 / 101010 = 1010000

from now on the pattern 110000 repeats every 6 bits:

a * 2^12 / b = 110101000000000000 / 101010 = 1010000110000

a * 2^18 / b = 110101000000000000000000 / 101010 = 1010000110000110000

It remains to discover how to find the repeating pattern, but it must repeat: after the first division everything is determined by the remainder of a * 2^6 / b, since the only contribution a * 2^m makes from then on is bringing down more zeros, and there are only finitely many possible remainders.

I suspect the final remainder is also calculable.
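
A minimal sketch of this block-by-block long division (assuming a, b > 0; it is not faster by itself, but it makes the repeating remainders visible, which is what a shortcut based on the cycle would exploit):

static BigInteger[] blockwiseScaledRatio(BigInteger a, BigInteger b, int m) {
    int s = b.bitLength();                       // block size: nb bits at a time
    BigInteger[] qr = a.divideAndRemainder(b);   // quotient and remainder of a / b
    BigInteger q = qr[0], r = qr[1];
    for (int done = 0; done < m; ) {
        int step = Math.min(s, m - done);
        BigInteger[] t = r.shiftLeft(step).divideAndRemainder(b);
        q = q.shiftLeft(step).add(t[0]);         // append the next quotient block
        r = t[1];                                // only the remainder carries forward
        done += step;
    }
    return new BigInteger[] { q, r };            // same as a.shiftLeft(m).divideAndRemainder(b)
}

For the example above (a = 53, b = 42) the remainder settles at 32 after the first block, which is exactly why the quotient keeps appending 110000.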

OldCurmudgeon