
I came across this question: given any number x, find the minimum number of operations needed to reach x = 1, where an operation is one of x-1, x+1, or x/2.

I understand the solution of just stepping through the operations (based on bit operations). However, that comes out to something like O(log n). Is there an O(1) solution to this problem? If not, is there a better solution than just running through the actual operations?

user760900
  • @Yunnosch How would you suggest I rephrase it? I'm sorry, I don't see how I would rephrase it so that it is clearer. Please feel free to suggest an edit. – user760900 Jul 29 '20 at 05:36
  • You already did. Actually I think there might be a way in O(size_in_bits), which might or might not be "O(log n)". Would you like to define what for you "n" means? – Yunnosch Jul 29 '20 at 05:37
  • Just because of the voting habits of users here on StackOverflow, you might want to show your code for the solution you are thinking of - and tag the language. – Yunnosch Jul 29 '20 at 05:40
  • @Jasmeet Promising. – Yunnosch Jul 29 '20 at 05:42
  • @Yunnosch It seems to answer the question as in the most efficient way to do this. But there is no O(1) answer? The problem has to be stepped through? – user760900 Jul 29 '20 at 06:13
  • 1
    No, there is no O(1) answers as the input must be fully read to provide a correct result and you did not mention if the input numbers have an arbitrary size, so they probably can have one. – Jérôme Richard Jul 29 '20 at 07:14
  • What is `x/2` exactly doing? Is this integer division? – Henry Jul 29 '20 at 08:20
  • @Henry yes, it is integer division. `x/2` can only happen if `x` is even (allowing for no decimal values) – user760900 Jul 29 '20 at 18:06

3 Answers


Big-O isn't everything, especially if you already have an O(log n) solution.

Theoretical reasoning

When talking about Big-O time complexity, the asymptotic behaviour for ever-increasing input values is what matters. So, inherently, we can't limit ourselves to fixed-length integer computations.

As no CPU today can do operations on unlimited-bit integers in constant time (and probably never will), a genuine O(1) solution is impossible, except for the very special cases where most of the bits are irrelevant (e.g. it's O(1) to decide whether a number is even, just by looking at the least significant bit).

So, O(log n) is the best you can hope for.

If you limit the bit-length (thus violating the asymptotic-behaviour aspect!), you can easily achieve O(1) by computing a table of all solutions (constant time independent of n, meaning O(1)) and then just indexing that table (also O(1)).
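For illustration, here's what that table approach could look like in Java for an assumed 16-bit input width (the width, the class name, and the DP rule used to fill the table are my additions, not part of the original statement):

```java
public class OpsTable {
    static final int WIDTH = 16;                  // assumed fixed bit-length
    static final int[] TABLE = new int[1 << WIDTH];

    static {
        // TABLE[n] = minimum number of operations to get from n to 1.
        // Even n: halving is optimal. Odd n: either step down to n-1,
        // or step up to n+1 and take the then-forced halving.
        for (int n = 2; n < TABLE.length; n++) {
            if (n % 2 == 0) TABLE[n] = TABLE[n / 2] + 1;
            else TABLE[n] = Math.min(TABLE[n - 1] + 1, TABLE[(n + 1) / 2] + 2);
        }
    }

    static int steps(int n) {                     // O(1): one array lookup
        return TABLE[n];
    }

    public static void main(String[] args) {
        System.out.println(steps(15));            // 15 -> 16 -> 8 -> 4 -> 2 -> 1: 5
    }
}
```

Filling the table is O(2^WIDTH) work and memory up front, which is exactly the "constant independent of n" mentioned above.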

Often, in Big-O analysis, we assume things like addition to be O(1) operations, and we can do so if our algorithm's runtime already explodes within the limits of e.g. 32-bit integers, so reasoning about bigger integers can be intentionally ignored. But with O(log n) there's no explosion within 32 bits.

Practical aspects

Typically, we're not interested in theoretical results, but want an estimate of how long our algorithm will take, given some expected input-value range. And if Big-O tells us results like O(2^n), we know we'll have a problem if n can grow beyond 20 or so.

But with results like O(log n), the dominating factor typically isn't the asymptotic growth with higher n values, but the actual number of computational steps necessary. So, optimizing the Big-O complexity no longer gives a real-world improvement (see my O(1) "solution" above - nobody would do it that way).

In the real world

  • Use a plausible, understandable algorithm with acceptable time and space complexity.
  • Test your application's runtime behaviour over the relevant input ranges against the requirements.
  • If you see performance problems, use a profiling tool to find the cause.
  • Optimize exactly the hot spots you have seen, maybe by doing some local improvements, maybe by using a more advanced algorithm.

Back to the question

An improvement to the algorithms proposed in the other post might be to precompute partial-result tables for e.g. 8-bit chunks of the number.

There have to be two tables, one for the "basic" case, and one for the case that a +1 operation from the previous chunk resulted in a carry into this chunk.

The tables have to provide two pieces of information:

  • the number of steps needed for this chunk,
  • a flag whether a positive carry happens to the next chunk.

Then you can iterate over the 8-bit chunks from low to high, add the steps-needed numbers, and use the carry-flag to select the appropriate next table.

This is still O(log n) but might be faster than the algorithms proposed so far.
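A possible Java sketch of this chunked scheme (chunk width, names, and table layout are my own choices, with one refinement over the description above: because the cheapest choice at an odd bit depends on later bits, the table stores a cost per combination of incoming and outgoing carry, and the chunk loop keeps the best cost for both carry states):

```java
public class ChunkedSteps {
    static final int W = 8;                          // chunk width (an assumption)
    static final int INF = Integer.MAX_VALUE / 2;
    // COST[sIn][v][sOut]: minimal ops to consume the W-bit chunk v when
    // entering with carry sIn and leaving with carry sOut.
    static final int[][][] COST = new int[2][1 << W][2];

    // Consume one bit; carry state s means "current value = remaining bits + s".
    static int[] consume(int[] best, int b) {
        int[] next = {INF, INF};
        for (int s = 0; s < 2; s++) {
            if (best[s] >= INF) continue;
            if (b + s == 0)                          // value even: /2
                next[0] = Math.min(next[0], best[s] + 1);
            else if (b + s == 2)                     // value even, carry survives: /2
                next[1] = Math.min(next[1], best[s] + 1);
            else {                                   // value odd: -1,/2 or +1,/2
                next[0] = Math.min(next[0], best[s] + 2);
                next[1] = Math.min(next[1], best[s] + 2);
            }
        }
        return next;
    }

    static {
        for (int sIn = 0; sIn < 2; sIn++)
            for (int v = 0; v < (1 << W); v++) {
                int[] best = {sIn == 0 ? 0 : INF, sIn == 1 ? 0 : INF};
                for (int i = 0; i < W; i++)
                    best = consume(best, (v >> i) & 1);
                COST[sIn][v][0] = best[0];
                COST[sIn][v][1] = best[1];
            }
    }

    static int steps(long n) {                       // assumes n >= 1
        int hi = 63 - Long.numberOfLeadingZeros(n);  // position of the leading 1
        int[] best = {0, INF};
        int i = 0;
        for (; i + W <= hi; i += W) {                // whole chunks below the leading bit
            int v = (int) ((n >>> i) & ((1 << W) - 1));
            int[] next = {INF, INF};
            for (int sIn = 0; sIn < 2; sIn++)
                for (int sOut = 0; sOut < 2; sOut++)
                    next[sOut] = Math.min(next[sOut], best[sIn] + COST[sIn][v][sOut]);
            best = next;
        }
        for (; i < hi; i++)                          // leftover bits, one at a time
            best = consume(best, (int) ((n >>> i) & 1));
        // at the leading 1 the value is 1 + carry; carry 1 costs one final /2
        return Math.min(best[0], best[1] + 1);
    }

    public static void main(String[] args) {
        System.out.println(steps(15));               // 5
    }
}
```

The table build is 2 * 2^W small DPs done once; each input then costs a couple of table lookups per chunk.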

Ralf Kleberhoff

Yes, if you only want to know the number of operations, there should be a way to do this without actually performing all the calculations, which would be much faster for very big numbers. Something like this:

public int countOps(String bits) { // bit string with only 1 and 0 and first bit is 1
    int count = 0, i = bits.length() - 1, b = bits.charAt(i) - '0';
    for (;;) {
        // remove 0s with /2
        while (b == 0) {
            count++;
            b = bits.charAt(--i) - '0';
        }
        // lowest bit b is now 1
        if (i < 1)
            return count;
        if (i == 1) // remaining value is 3: -1, then /2
            return count + 2;
        int a = bits.charAt(i - 1) - '0';
        if (a == 0) { // have to do -1
            i -= 2;
            count += 3; // -1, /2, /2
            if (i < 1)
                return count;
            b = bits.charAt(i) - '0';
        } else { // have to do +1 and propagate
            count += 2; // +1, /2 (the /2 at the end of the propagation)
            while (a == 1) {
                count++; // /2
                if (--i == 0)
                    return count;
                a = bits.charAt(i - 1) - '0';
            }
            i--;
        }
    }
}
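To sanity-check implementations like the one above, the minimum can also be computed with the well-known greedy rule for this operation set (halve when even; when odd, subtract if n == 3 or n % 4 == 1, otherwise add). A small standalone sketch for cross-checking on small inputs:

```java
public class GreedyCheck {
    // Greedy reference: optimal for the -1 / +1 / halve-when-even operation set.
    static int greedySteps(long n) {
        int count = 0;
        while (n > 1) {
            if (n % 2 == 0) n /= 2;                 // even: halving is optimal
            else if (n == 3 || n % 4 == 1) n--;     // odd ...01 (or 3): subtract
            else n++;                               // odd ...11: add, creating zeros
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(greedySteps(15));        // 15 -> 16 -> 8 -> 4 -> 2 -> 1: 5
    }
}
```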
maraca

The accepted answer of the proposed duplicate question shows an algorithm which determines the best operation given the least significant bits of the input number.

However, this comes out to be something like O(log n). Is there an O(1) solution to this problem? If not, is there a better solution than just running through the actual operations?

To optimize that solution we can directly calculate the number of operations needed for every bit pattern and then remove those bits.

Note that we have to change the higher bits only when an addition is performed; otherwise only a bit shift is needed.

Also note that the following C++ code returns 1 when the input is 0, contrary to the mentioned answer, which doesn't consider that corner case.

size_t steps_count(size_t n)
{
    size_t count{};
    while (n > 3) {
        switch ( n % 4 ) {
            case 0b00:        // ...00: two halvings
                count += 2;
                n >>= 2;
                break;
            case 0b01:        // ...01: -1, then two halvings
                count += 3;
                n >>= 2;
                break;
            case 0b10:        // ...10: one halving
                ++count;
                n >>= 1;
                break;
            case 0b11:        // ...11: +1, two halvings, carry into the higher bits
                count += 3;
                n >>= 2;
                ++n;
                break;
        }
    }
    switch ( n ) {
        case 1:
            return count;
        case 3:
            return count + 2;
        default:
            return count + 1;
    }
}

This approach could be generalized by processing more bits at a time and by introducing a look-up table. Whether this is worth the effort is left to the reader to determine.

Bob__