 result = 0
 i = 0
 while i < 2**n:
      result = result + i
      i += 1
 # end while

I'm assuming it's O(2^n). This is Python code.

melpomene
Bob Marley
    I'm not into Python; does `2**n` mean `2^n`? If so, the running time is `O(2^n)`. – Codor Dec 09 '16 at 09:13
  • yes, it is 2^n as @Codor says. if you are interested how it is computed, take a look at this - http://stackoverflow.com/a/4852666/4481312 – marmeladze Dec 09 '16 at 09:13
    @melpomene: `2**n` requires one *shift*: `2**n` == `1 << n`; `n` doesn't change within the loop so we might expect `2**n` will *not* be recomputed – Dmitry Bychenko Dec 09 '16 at 09:32
  • @melpomene "Is Python smart enough to [optimize this]?" -> So, is it? You asserted that `2**n` was recomputed at every iteration, so you seem to know about this subject. Though I know that `while i < some_function(n)` would make `some_function(n)` be re-evaluated at every iteration, I am not sure it is the same with `2**n`, since it uses a built-in. I genuinely wonder if Python does this optimization. – Right leg Dec 09 '16 at 09:44
    @melpomene That doesn't factor into the O notation. What if the code does a `sleep(1)` in every loop iteration? *That's just how long one loop takes.* That's not what O is expressing. O is expressing the *complexity*, not the absolute running time. – deceze Dec 09 '16 at 09:49
  • @deceze You're right. I wasn't thinking clearly. I retract all of my comments. – melpomene Dec 09 '16 at 10:36
    For the record, in all existing versions of CPython, (1) `2**n` *is* recomputed on every iteration, and (2) it is not computed by doing `1 << n`. The latter would be [much faster](https://stackoverflow.com/a/60325865/12299000) for large n. – kaya3 Feb 28 '20 at 17:00
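Following up on the comments above, here is a minimal sketch (the function name `slow_sum` is mine, not from the question) showing both points: `1 << n` equals `2**n` for non-negative `n`, and the loop bound can be hoisted so it is computed only once:

```python
def slow_sum(n):
    # Hoist the bound out of the loop so 2**n is computed only once;
    # a left shift gives the same value for non-negative integer n.
    bound = 1 << n
    assert bound == 2 ** n
    result = 0
    i = 0
    while i < bound:
        result = result + i
        i += 1
    return result
```

Since CPython does not hoist `2**n` out of the loop condition itself (as kaya3's comment notes), doing it by hand is a real saving, even though it does not change the O(2^n) iteration count.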

2 Answers


I think your code's time complexity is O(2^n log n), because you are computing 2^n on each of the 2^n iterations.
a^b can be computed in O(log b) multiplications using exponentiation by squaring, and I think Python's built-in exponentiation is such an O(log n) algorithm.
So the time complexity is O(2^n log n).
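For reference, exponentiation by squaring can be sketched as follows (a minimal illustration for non-negative integer exponents, not CPython's actual implementation):

```python
def power(base, exp):
    # Exponentiation by squaring: O(log exp) multiplications.
    result = 1
    while exp > 0:
        if exp & 1:       # low bit set: fold the current base into the result
            result *= base
        base *= base      # square the base
        exp >>= 1         # shift to the next bit of the exponent
    return result
```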

square1001
    you're right that the exponentiation may take nontrivial time (perhaps it's logarithmic, quite possibly a constant time bit shift) but the overall complexity class is best served as basic as possible, in this case `O(2^n * log n) == O(2^n)`. We try not to care about small fish – mike3996 Dec 10 '16 at 10:26
  • @nperson325681 `O(2^n * log n)` is not `O(2^n)`. You don't get to throw away a non-constant *factor* just because the other factor is much bigger; apply the definition of big O, and you'll see there is no constant c such that `2^n * c > 2^n * log n` for sufficiently large n, because `log n` is not bounded by a constant. – kaya3 Feb 28 '20 at 16:55
  • By the way, computing 2^n takes at least O(n) time simply because the result requires at least n+1 bits to represent, and you can't output n+1 bits in under O(n) time. – kaya3 Feb 28 '20 at 16:55

The time complexity is O(2^n × n).

The reason is that numbers of size 2^O(n) take O(n) bits to represent as int objects. If the result of an arithmetic operation takes O(n) bits then it can't be computed in less than O(n) time, because Python's int objects are immutable, so each operation must allocate and fill a fresh O(n)-bit object.
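A quick sanity check of the size claim, using `int.bit_length` (a minimal sketch):

```python
n = 50
big = 2 ** n
# A number of size 2^n needs exactly n + 1 bits.
assert big.bit_length() == n + 1
# Each arithmetic operation produces a brand-new int object of
# roughly n bits, since Python ints are immutable.
bigger = big + big
assert bigger.bit_length() == n + 2
```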

So:

  • The addition result = result + i takes O(n) time because result uses O(n) bits.
  • The addition i += 1 takes O(n) time, because i uses O(n) bits.
  • It also takes O(n) time to compute 2**n, for the same reason: the result of this operation uses O(n) bits. The exponentiation algorithm only does O(log n) multiplications, but the time is dominated by the last multiplication, which is like 2^(n/2) * 2^(n/2).
  • And of course, the comparison i < 2**n takes O(n) time because both numbers use O(n) bits.

So, the loop iterates O(2^n) times, and it does O(n) work on each iteration, hence the result.
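As an aside, if the goal is just the sum, the loop can be avoided entirely with the arithmetic-series formula: the sum of 0 .. 2^n − 1 is 2^n (2^n − 1) / 2. A sketch (the name `fast_sum` is mine):

```python
def fast_sum(n):
    # Sum of 0 .. 2**n - 1 via the arithmetic-series formula,
    # avoiding the O(2^n)-iteration loop entirely.
    m = 1 << n          # 2**n
    return m * (m - 1) // 2
```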

kaya3
  • Based on some benchmarks, I think `2**n` isn't O(n) but is *worse*. And it's not clear why 2^(n/2) * 2^(n/2) is O(n). It's a multiplication of two (n/2+1)-bit numbers. At least the naive way from school would take O(n^2) for that. – Kelly Bundy Feb 29 '20 at 00:01
  • @HeapOverflow I wasn't able to find a reference, so I benchmarked up to n = 10^10 and it seemed to be either O(n) or O(n log n) - the best-fit curve has a s.d. of about 5-6% either way, which is less than the error on the individual measurements - and I can't think of a good reason that O(n log n) should be right. It sure doesn't look like O(n^2). Here's up to 10^9 and 10^10 on Python 3.6.2 respectively: https://i.imgur.com/4gZRKXV.png https://i.imgur.com/BgT76Wk.png – kaya3 Feb 29 '20 at 01:07
    I think multiplication [uses Karatsuba](https://github.com/python/cpython/blob/02673352b5db6ca4d3dc804965facbedfe66425d/Objects/longobject.c#L3657). And for *random* large ints I do observe Karatsuba's O(n^1.58). I guess the factor being a power of 2 happens to make that faster because the lower half of a factor is always zero. Not yet sure how much faster. – Kelly Bundy Feb 29 '20 at 02:26