2

For the classic interview question "How do you perform integer multiplication without the multiplication operator?", the easiest answer is, of course, the following linear-time algorithm in C:

int mult(int multiplicand, int multiplier)
{
    int result = 0;
    for (int i = 0; i < multiplier; i++)
    {
        result += multiplicand;
    }

    return result;
}

Of course, there is a faster algorithm. If we take advantage of the fact that shifting a number left by k bits multiplies it by 2^k, we can shift up to the nearest power of 2 at or below the multiplier, then use our previous algorithm to add our way up from there. So our code would now look something like this:

#include <math.h>

/* named ilog2 to avoid clashing with the log2() that C99's math.h declares */
int ilog2( double n )
{
    return (int)(log(n) / log(2));
}

int mult(int multiplicand, int multiplier)
{
    int power = ilog2(multiplier);
    int nearest_power = 1 << power;     /* largest power of 2 <= multiplier; note "2 ^ power" would be XOR */
    int result = multiplicand << power; /* multiplicand * nearest_power, via the shift */
    for (int i = nearest_power; i < multiplier; i++)
    {
        result += multiplicand;
    }

    return result;
}

I'm having trouble determining what the time complexity of this algorithm is. I don't believe that O(n - 2^(floor(log2(n)))) is the correct way to express this, although (I think?) it's technically correct. Can anyone provide some insight on this?

rybosome
  • multiplicand += multiplicand; – Mikeb Mar 23 '12 at 18:17
  • 1
    your first algorithm is not a multiplication algorithm, but the result is `multiplicant * pow(2,multiplier)`, no? – Jens Gustedt Mar 23 '12 at 18:35
  • Ah, the off-by-one error...thanks for pointing that out, fixed. – rybosome Mar 23 '12 at 18:43
  • @HerroRygar - It is worth noting that those two only work for a positive multiplier (my bit-shift-and-add has that problem too). Also, isn't `log2()` as expensive as multiplication by shifting? – gbulmer Mar 23 '12 at 18:51
  • @HerroRygar - sorry, missed a bit... IMHO calculating `log(n) / log(2)` is more expensive than the entire operation of multiplying by shifting. The calculation of `nearest_power` is 'the highest set bit', which is much cheaper than log2. – gbulmer Mar 23 '12 at 19:03

3 Answers

3

multiplier - nearest_power can be as large as half of multiplier, and as multiplier tends towards infinity the constant 0.5 there doesn't matter (not to mention Big O discards constant factors anyway). The loop is therefore O(multiplier). I'm not sure about the bit-shifting.

Edit: I took more of a look around on the bit-shifting. As gbulmer says, it can be O(n), where n is the number of bits shifted. However, it can also be O(1) on certain architectures. See: Is bit shifting O(1) or O(n)?

However, it doesn't matter in this case! The number of bits shifted is at most log2(multiplier), and log2(n) < n for all valid n. So we have O(log2(multiplier)) + O(multiplier), which is a subset of O(2*multiplier) due to the aforementioned relationship, and thus the whole algorithm is O(multiplier).

sastraxi
  • bit shifting is O(log2(multiplier)) or O(log2(min(multiplier,multiplicand))), see my answer – gbulmer Mar 23 '12 at 18:56
  • A bit shift of 1 place is O(1) (on all modern machines I'm aware of). IMHO, for this question it is not relevant that an n-place shift might be O(1). Multiplication needs n 1-bit shifts, where n is really the position of the highest set bit (for simplicity, the number of bits in the multiplier). In summary: it doesn't matter that shifting an arbitrary number of places might be O(1), because we only need 1-bit shifts, but we need n of them to multiply by an n-bit multiplier, since we must be able to do an add after each shift. – gbulmer Mar 23 '12 at 19:55
0

Edit

Let's look at the second posted algorithm, starting with:

int nearest_power = 2 ^ (floor(log2(multiplier)));

I believe calculating log2 is, rather pleasingly, O(log2(multiplier)).

Then nearest_power lands somewhere in the interval [multiplier/2, multiplier], whose width is multiplier/2. Computing it is the same as finding the highest set bit of a positive number.

So the for loop is O(multiplier/2); the constant 1/2 comes out, so it is O(multiplier).

On average, multiplier is half that interval away from nearest_power, which would be O(multiplier/4). But that is just the constant 1/4 times multiplier, so it is still O(multiplier); the constant is smaller, but it is still linear.

A faster algorithm.

Our intuition is that we can multiply by an n-digit number in n steps.

In binary this is using 1-bit shift, 1-bit test and binary add to construct the whole answer. Each of those operations is O(1). This is long-multiplication, one digit at a time.

If we use O(1) operations per bit of n, an x-bit number, the whole multiplication is O(x), i.e. O(log2(n)), where x is the number of bits in the number.

This is an O(log2(n)) algorithm:

int mult(int multiplicand, int multiplier) {
    int product = 0;

    while (multiplier) {
        if (multiplier & 1) product += multiplicand;
        multiplicand <<= 1;
        multiplier >>= 1;
    }

    return product;
}

It is essentially how we do long multiplication.

Of course, the wise thing to do is use the smaller number as the multiplier. (I'll leave that as an exercise for the reader :-)
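In case it's useful, one possible version of that exercise (a sketch of mine, not part of the original answer): swap the arguments so the loop runs over the smaller number's bits.

```c
/* Positive operands only, per the note below; mult_small is my own name. */
int mult_small(int multiplicand, int multiplier)
{
    /* The loop runs once per bit of multiplier, so make multiplier
       the smaller of the two operands before starting. */
    if (multiplier > multiplicand) {
        int tmp = multiplicand;
        multiplicand = multiplier;
        multiplier = tmp;
    }

    int product = 0;
    while (multiplier) {
        if (multiplier & 1) product += multiplicand;  /* add shifted partial */
        multiplicand <<= 1;                           /* next binary digit */
        multiplier >>= 1;
    }
    return product;
}
```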

This only works for positive values, but by testing and remembering the signs of the input, operating on positive values, and then adjusting the sign, it works for all numbers.
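That sign handling might be sketched like this (my sketch; `mult_signed` is a hypothetical name, and overflow at INT_MIN is ignored for simplicity):

```c
int mult_signed(int multiplicand, int multiplier)
{
    /* Remember whether the result should be negative, then operate on
       magnitudes; unsigned arithmetic keeps the negation well-defined. */
    int negative = (multiplicand < 0) != (multiplier < 0);
    unsigned a = multiplicand < 0 ? -(unsigned)multiplicand : (unsigned)multiplicand;
    unsigned b = multiplier  < 0 ? -(unsigned)multiplier  : (unsigned)multiplier;

    unsigned product = 0;
    while (b) {
        if (b & 1) product += a;   /* same shift-and-add loop as above */
        a <<= 1;
        b >>= 1;
    }
    return negative ? -(int)product : (int)product;  /* restore the sign */
}
```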

gbulmer
0

The point of finding the nearest power is that your function's runtime can get close to O(1). This happens when the multiplier is at (or just above) a power of 2, so the loop has almost nothing left to add.

Behind the scenes the whole "to the power of 2" is done with bit shifting.

So, to answer your question, the second version of your code is still worst-case linear time: O(multiplier).
Your answer, O(n - 2^(floor(log2(n)))), is also not incorrect; it's just very precise and might be hard to evaluate in your head quickly to find the bounds.

Adrian
  • So, then what I'm looking at is just a best case constant time, average case and worst case linear? – rybosome Mar 23 '12 at 18:22
  • you can look at best, avg, and worst case; you can also look at asymptotic bounds given by notations like big O. It depends what you want to see. Look on wikipedia for big O notation, small O,theta notation, and big and small omega. Big O is the most used one: http://en.wikipedia.org/wiki/Big_O_notation – Adrian Mar 23 '12 at 18:24
  • The technique used in the question to derive the nearest power of 2 is worse than O(log2(n)), so the O(1) is swamped. It's then followed by O(n) additions: the distance between x-1 and x/2 averages x/4, which is still O(x), but with a smaller constant. – gbulmer Mar 23 '12 at 20:05