#include <stdint.h>

uint64_t binmul(uint64_t a, uint64_t b) {
    uint64_t res = 0;

    while (b > 0) {
        if (b & 1)
            res += a;   /* add the current shifted copy of a */
        a <<= 1;        /* a = 2 * a, done as a shift */
        b >>= 1;
    }

    return res;
}

Would it be significantly faster than simply doing `a * b`, assuming the numbers being multiplied are very big?

  • Why don't you measure using a benchmark? Or look at the assembly produced and estimate how long it would take (compared to the assembly produced by a single native multiplication)? – Paul Hankin May 16 '20 at 05:52
  • `Is this way of doing multiplcation faster?` dropping an article (or alternative) and a vowel is not. – greybeard May 16 '20 at 06:18
  • usually it's much faster to use math on ALU sized words instead of on individual bits ... Your code does it bit per bit and is not optimized ... for example `a=2*a` can be done with addition or bit shift ... see [Fast bignum square computation](https://stackoverflow.com/q/18465326/2521214) on other methods for fast multiplication with implementations ... – Spektre May 18 '20 at 06:19

1 Answer


If the result of the multiplication fits in a built-in numeric data type, then the hardware and compilers we use today are already optimized to do what you are trying to achieve: the compiler turns `a * b` into a single native multiply instruction. You should just trust it and use it. The odds that we figure out a better algorithm in an hour or a few days than decades of hardware and compiler research tend to zero.

I can suggest a few readings relevant to your post:

  1. Karatsuba Multiplication
  2. Fast Fourier Transform Multiplication
  3. Schönhage–Strassen algorithm
  4. Toom–Cook multiplication

These are just a few pointers; you will encounter variations of the above and other algorithms as you read more.
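To give a taste of the first item, here is a hedged sketch of a single Karatsuba step in C: it splits 32-bit operands into 16-bit halves and forms the full 64-bit product with three half-width multiplications instead of four. The function name and the fixed one-level split are my own illustration; real bignum libraries apply the idea recursively to arrays of machine words.

```c
#include <stdint.h>

/* One Karatsuba step: multiply two 32-bit numbers using three
   16x16-bit multiplications instead of four.
   With a = a1*2^16 + a0 and b = b1*2^16 + b0:
     a*b = z2*2^32 + z1*2^16 + z0, where
     z2 = a1*b1, z0 = a0*b0, z1 = (a1+a0)*(b1+b0) - z2 - z0. */
uint64_t karatsuba32(uint32_t a, uint32_t b) {
    uint32_t a1 = a >> 16, a0 = a & 0xFFFF;
    uint32_t b1 = b >> 16, b0 = b & 0xFFFF;
    uint64_t z2 = (uint64_t)a1 * b1;
    uint64_t z0 = (uint64_t)a0 * b0;
    /* a1+a0 and b1+b0 each fit in 17 bits, so this product fits in 64. */
    uint64_t z1 = (uint64_t)(a1 + a0) * (b1 + b0) - z2 - z0;
    return (z2 << 32) + (z1 << 16) + z0;
}
```

Saving one multiplication per split is what gives Karatsuba its O(n^log2 3) ≈ O(n^1.585) cost on n-word numbers, which only pays off once the operands span many machine words.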

Shridhar R Kulkarni