Big-O isn't everything, especially if you already have an O(log n) solution.
Theoretical reasoning
When talking about Big-O time complexity, what matters is the asymptotic behaviour for ever-increasing input values. So, inherently, we can't limit ourselves to fixed-length integer computations.
As there is no CPU today that can do operations on unlimited-width integers in constant time (and there probably never will be), a serious O(1) is impossible in itself, except for the very special cases where most of the bits are irrelevant (e.g. it's O(1) to decide whether a number is even, by just looking at the least significant bit).
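For instance (in Python, purely as an illustration):

```python
def is_even(n: int) -> bool:
    # Only the least significant bit is inspected, no matter how many
    # bits n has, so this check is O(1).
    return (n & 1) == 0
```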
So, O(log n) is the best you can hope for.
If you limit the bit length (thus violating the asymptotic-behaviour aspect!), you can easily achieve O(1) by precomputing a table of all solutions (constant time, independent of n, hence O(1)) and then just indexing that table (also O(1)).
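A sketch of that (deliberately impractical) trick, with a hypothetical solve() standing in for the real computation and a small fixed bit width:

```python
BIT_WIDTH = 16  # fixed input width, so there are only 2**16 possible inputs

def solve(n: int) -> int:
    # Hypothetical stand-in for the actual computation whose results
    # we want to tabulate.
    return n.bit_length()

# Building the table is expensive, but its cost is a fixed constant,
# independent of any particular input - so, formally, O(1).
TABLE = [solve(n) for n in range(1 << BIT_WIDTH)]

def solve_fast(n: int) -> int:
    # A single table lookup: O(1).
    return TABLE[n]
```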
Often, in Big-O analysis, we assume operations like addition to be O(1). We can do so when our algorithm's runtime already explodes within the limits of e.g. 32-bit integers, so reasoning about bigger integers can deliberately be ignored. But with O(log n), there is no such explosion within 32 bits.
Practical aspects
Typically, we're not interested in theoretical results, but want an estimate of how long our algorithm will take over the expected input-value range. And if Big-O gives us results like O(2^n), we know we'll have a problem if n can grow beyond 20 or so (2^20 is already about a million basic steps).
But with results like O(log n), the dominating factor typically isn't the asymptotic growth for higher n values, but the actual number of computational steps needed. So optimizing the Big-O complexity no longer gives a real-world improvement (see my O(1) "solution" above - nobody would do it that way).
In the real world
- Use a plausible, understandable algorithm with acceptable time and space complexity.
- Test your application's runtime behaviour over the relevant input ranges against the requirements.
- If you see performance problems, use a profiling tool to find the cause.
- Optimize exactly the hot spots you have seen, maybe by doing some local improvements, maybe by using a more advanced algorithm.
Back to the question
An improvement to the algorithms proposed in the other post might be to precompute partial-result tables for e.g. 8-bit chunks of the number.
There have to be two tables: one for the "basic" case, and one for the case where a +1 operation from the previous chunk resulted in a carry into this chunk.
The tables have to provide two pieces of information:
- the number of steps needed for this chunk,
- a flag indicating whether a carry propagates into the next chunk.
Then you can iterate over the 8-bit chunks from lowest to highest, add up the step counts, and use the carry flag to select the appropriate table for the next chunk.
This is still O(log n) but might be faster than the algorithms proposed so far.
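A minimal sketch of that idea in Python, assuming the step rule is the usual "halve an even number, add 1 to an odd number, count the steps until 1" - all names and the self-test are purely illustrative, and the per-chunk simulation would have to be adapted if the actual rules differ:

```python
CHUNK_BITS = 8
CHUNK_MASK = (1 << CHUNK_BITS) - 1

def count_steps_naive(n: int) -> int:
    # Assumed step rule: halve even numbers, add 1 to odd numbers,
    # count the operations until n becomes 1.
    steps = 0
    while n != 1:
        n = n + 1 if n & 1 else n >> 1
        steps += 1
    return steps

def build_tables():
    # For every possible chunk value and incoming carry, simulate shifting
    # the chunk out completely (higher, non-zero bits still exist, so the
    # loop cannot terminate inside this chunk) and record the step count
    # plus the carry that spills over into the next chunk.
    tables = []
    for carry_in in (0, 1):
        table = []
        for chunk in range(1 << CHUNK_BITS):
            x = chunk + carry_in
            steps = 0
            for _ in range(CHUNK_BITS):
                if x & 1:
                    x += 1
                    steps += 1
                x >>= 1
                steps += 1
            table.append((steps, x))  # x is now 0 or 1: the carry out
        tables.append(table)
    return tables

TABLES = build_tables()

def count_steps_chunked(n: int) -> int:
    steps = 0
    carry = 0
    # Process the full low chunks while more significant bits remain.
    while (n >> CHUNK_BITS) != 0:
        chunk_steps, carry = TABLES[carry][n & CHUNK_MASK]
        steps += chunk_steps
        n >>= CHUNK_BITS
    # Finish the topmost chunk with the plain loop.
    return steps + count_steps_naive(n + carry)

# Quick self-test of the table-driven version against the plain loop.
if __name__ == "__main__":
    import random
    for _ in range(10_000):
        n = random.randint(1, 1 << 64)
        assert count_steps_chunked(n) == count_steps_naive(n)
```

Whether the table-driven loop actually beats a plain bit-by-bit loop is exactly the kind of question the profiling advice above is meant to answer.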