So given the following program:


Is the time complexity of this program O(0)? In other words, is 0 O(0)?

I thought answering this in a separate question would shed some light on this question.

EDIT: Lots of good answers here! We all agree that 0 is O(1). The question is, is 0 O(0) as well?

jyoungdev
  • @abelenky fell for it. ;) – Jon Purdy Jul 09 '10 at 01:51
  • I disagree that it sheds any light on the question. Asking about a program that runs in constant time for any input is not the same as asking about the complexity of a program whose running time tends towards 0 as the input tends towards infinity. – Eric Jul 09 '10 at 04:56
  • @abelenky The program is there; it's just so small you can't see it. – jyoungdev Jul 09 '10 at 11:17
  • God, this is a question of paramount importance to the world! – Andreas Rejbrand Jul 09 '10 at 17:24
  • @Rejbrand: Sometimes it's good to understand why the world spins the way it does. – Michael Foukarakis Jul 09 '10 at 17:52
  • @Michael Foukarakis: Of course; I'm a physicist. But this seems more to be a question of definitions, and a rather trivial case as well. – Andreas Rejbrand Jul 09 '10 at 18:13
  • In some languages, writing no code does not necessarily imply it takes no time to execute. Some compilers may still insert NOP instructions for whatever technical reason (including padding). So in practice, you cannot be certain that a segment of code is O(0) rather than O(1). – SOFe Jan 23 '18 at 05:00

14 Answers


From Wikipedia:

A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function.

From this description, since the empty algorithm requires 0 time to execute, its running time has an upper bound of O(0). This means it is also O(1), which happens to be a larger upper bound.

Edit:

More formally, from CLR (1st ed., p. 26):

For a given function g(n), we denote by O(g(n)) the set of functions

O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }

The asymptotic time performance of the empty algorithm, executing in 0 time regardless of the input, is therefore a member of O(0).
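
To make the membership check concrete, here is a minimal Python sketch of that definition. The helper name in_big_o and the finite sample range are my own illustrative choices; a finite sample can only approximate the "for all n ≥ n0" quantifier:

def in_big_o(f, g, c=1, n0=1, ns=range(1, 1000)):
    # Finite-sample stand-in for "0 <= f(n) <= c*g(n) for all n >= n0".
    return all(0 <= f(n) <= c * g(n) for n in ns if n >= n0)

zero = lambda n: 0                     # running time of the empty algorithm
print(in_big_o(zero, zero))            # True: 0 is in O(0)
print(in_big_o(zero, lambda n: 1))     # True: 0 is also in O(1)
print(in_big_o(lambda n: 1, zero))     # False: a nonzero constant is not in O(0)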

Edit 2:

We all agree that 0 is O(1). The question is, is 0 O(0) as well?

Based on the definitions, I say yes.

Furthermore, I think there's a bit more significance to the question than many answers indicate. By itself the empty algorithm is probably meaningless. However, whenever a non-trivial algorithm is specified, the empty algorithm could be thought of as lying between consecutive steps of the specified algorithm, as well as before and after them. It's nice to know that "nothingness" does not affect the algorithm's asymptotic time performance.

Edit 3:

Adam Crume makes the following claim:

For any function f(x), f(x) is in O(f(x)).

Proof: let S be a subset of R, let T be a subset of R* (the non-negative real numbers), let f: S → T, and let c ≥ 1. Then 0 ≤ f(x) ≤ f(x), which gives 0 ≤ f(x) ≤ c·f(x) for all x ∈ S. Therefore f(x) ∈ O(f(x)).

Specifically, if f(x) = 0 then f(x) ∈ O(0).

andand
  • I just hope no one edits Wikipedia to cite this post. Potential stack overflow... – Tim Goodman Jul 09 '10 at 01:17
  • Wouldn't it rather be a hijacked EIP? – codekaizen Jul 09 '10 at 01:25
  • I have no problem with O(1) and O(2) being "the same", but O(0) feels a little special to me; I would tread carefully here. – Hamish Grubijan Jul 09 '10 at 03:52
  • @Hamish Grubijan The problem is that we're used to the special properties of 0 regarding multiplication. In fact, O(1) is really only a representative for O(c), the whole class of functions running in CONSTANT time, regardless of the actual time span. 0 is a constant time span. Therefore O(0) = O(1) = ... = O(c) for any natural number c. – Dave O. Jul 09 '10 at 07:21
  • @Dave: O(0) may be a member of O(1), but the reverse is not true. Therefore O(0) != O(1). – andand Jul 09 '10 at 17:25
  • @andand O(0) is a proper subset, not a member, of O(1). – Stewart Jul 12 '10 at 22:11
  • @Stewart: Yes. I should have been more careful. 0 is a member of O(1) while O(0) is a subset of O(1). – andand Jul 13 '10 at 00:31

It takes the same amount of time to run regardless of the input, therefore it is O(1) by definition.
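
A quick empirical sketch of this (illustrative Python; the absolute timings are machine-dependent): time a do-nothing function against inputs of wildly different sizes and watch the per-call cost stay flat.

import timeit

def empty(x):
    pass  # does nothing with its input

for n in (1, 10**3, 10**6):
    arg = list(range(n))  # the input size varies wildly...
    per_call = timeit.timeit(lambda: empty(arg), number=100_000) / 100_000
    print(f"n={n:>8}: ~{per_call * 1e9:.0f} ns per call")
# ...but the per-call time stays a small, roughly constant number of
# nanoseconds, independent of n: constant time, i.e. O(1).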

Ignacio Vazquez-Abrams

Several answers say that the complexity is O(1) because the time is a constant and the time is bounded by the product of some coefficient and 1. Well, it is true that the time is a constant and it is bounded that way, but that doesn't mean that the best answer is O(1).

Consider an algorithm that runs in linear time. It is ordinarily designated as O(n), but let's play devil's advocate: the time is also bounded by the product of some coefficient and n^2. If we consider O(n^2) to be a set (the set of all algorithms whose running time is bounded that way), then linear algorithms are in that set. But it doesn't mean that the best answer is O(n^2).

The empty algorithm is in O(n^2) and in O(n) and in O(1) and in O(0). I vote for O(0).

Windows programmer

I have a very simple argument for the empty algorithm being O(0): For any function f(x), f(x) is in O(f(x)). Simply let f(x)=0, and we have that 0 (the runtime of the empty algorithm) is in O(0).

On a side note, I hate it when people write f(x) = O(g(x)), when it should be f(x) ∈ O(g(x)).

Adam Crume
  • I think we all know that 'is' == '=' == '∈' in this context. They've been used interchangeably for as long as I can recall. – Michael Foukarakis Jul 09 '10 at 18:21
  • Nothing personal, but "it's always been this way" is one of my least favorite arguments. (I heard it all the time at a previous employer using 70s-era technology, but I digress.) I'm a mathematician at heart, and correctness is what matters. – Adam Crume Jul 09 '10 at 18:38
  • I agree with you. But interestingly, although formally O(g(x)) is defined as a class of functions (so subset notation would be correct), *informally* everyone thinks of O(g(x)) not as a class but as meaning "some function such that…". And the "=" is not a symmetric equality sign, but just means "is" (Knuth explicitly points this out, along with "Aristotle is a man, but a man isn't necessarily Aristotle"). And when used in analysis, in expressions like e^x = 1 + x + O(x^2) (near 0), the addition is *defined* as a set again, but the informal perspective shines through. :-) – ShreevatsaR Jul 10 '10 at 18:24
  • Using = for f(x) = O(g(x)) caused me much unnecessary confusion in college, especially since I don't remember any instructors or textbooks ever letting us know what = means in this context. – jyoungdev Jul 11 '10 at 15:00

Big O is asymptotic notation. To use big O, you need a function; in other words, the expression must be parametrized by n, even if n is not used. It makes no sense to say that the number 5 is O(n); it's the constant function f(n) = 5 that is O(n).

So, to analyze time complexity in terms of big O you need a function of n. Your algorithm arguably always makes 0 steps, but without a varying parameter, talking about asymptotic behaviour makes no sense. Assume that your algorithm is parametrized by n; only now may you use asymptotic notation. It makes no sense to say that it is O(n²), or even O(1), if you don't specify what n is (or the variable hidden in O(1))!

As soon as you settle on the number of steps, it's a matter of the definition of big O: the function f(n) = 0 is O(0).

Since this is a low-level question, it depends on the model of computation. Under "idealistic" assumptions, it is possible you don't do anything. But in Python, you cannot write def f(x):, only def f(x): pass. If you assume that every instruction, even pass (a NOP), takes time, then the running time is f(n) = c for some constant c > 0, and you can only say that f is O(1), not O(0).
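
A quick way to see this in CPython (a sketch; the exact opcodes depend on the interpreter version) is to disassemble a function whose body is just pass:

import dis

def f(x):
    pass

dis.dis(f)
# Even an empty body prints real instructions (e.g. RESUME followed by
# RETURN_CONST on recent CPython, or LOAD_CONST None and RETURN_VALUE on
# older versions). "Doing nothing" still executes instructions, so c > 0
# and the best honest bound is O(1), not O(0).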

It's worth noting that big O by itself does not have anything to do with algorithms. For example, you may say sin x = x + O(x³) when discussing a Taylor expansion. Also, O(1) does not mean constant; it means bounded by a constant.

sdcvvc
  • You could make your algorithm a function that takes a variable-length array as a dummy parameter. If you're talking about an entire program being O(0), you could consider it to be parameterised by its command line arguments. – Stewart Jun 29 '12 at 22:12
  • I agree with your point on NOP; even a C `void f(void) {}` takes time, because it has an implicit `return` and it has to be called (= stack manipulation takes place to push the return address and jump to the code), which also takes time. @sdcvvc Do we consider this stack bookkeeping part of the time to execute an algorithm? I think so. At the same time, if we consider a truly empty algorithm (not even an assembly-level NOP, i.e. no instructions at all, like the OP asked), that's `O(0)`. – TWiStErRob Dec 08 '15 at 11:34

All of the answers so far address the question as if there is a right and a wrong answer. But there isn't. The question is a matter of definition. Usually in complexity theory the time cost is an integer, although that too is just a definition. You're free to say that the empty algorithm that quits immediately takes 0 time steps or 1 time step. It's an abstract question because time complexity is an abstract definition. In the real world, you don't even have time steps; you have continuous physical time. It may be true that one CPU has clock cycles, but a parallel computer could easily have asynchronous clocks, and in any case a clock cycle is extremely small.

That said, I would say that it's more reasonable to say that the halt operation takes 1 time step rather than that it takes 0 time steps. It does seem more realistic. For many situations it's arguably very conservative, because the overhead of initialization is typically far greater than executing one arithmetic or logical operation. Giving the empty algorithm 0 time steps would only be reasonable to model, for example, a function call that is deleted by an optimizing compiler that knows that the function won't do anything.

Greg Kuperberg
  • Good points. My intent with the program given is that it takes absolutely 0 time. Doesn't exist in real life except, maybe, an empty inline function that a compiler would optimize away altogether. – jyoungdev Jul 09 '10 at 11:23
  • In the real world computers do have time steps, they are called clock cycles. – starblue Jul 09 '10 at 13:05
  • starblue, you're right that one CPU has clock cycles. But if you have a parallel computer with several CPUs, their clocks could be asynchronous. In any case the clock cycle time is extremely small. I'll edit the question to clarify. – Greg Kuperberg Jul 09 '10 at 15:22

It should be O(1). The coefficient is always 1.

Consider:

If something grows like 5n, you don't say O(5n), you say O(n) [in other words, O(1n)]

If something grows like 7n^2, you don't say O(7n^2), you say O(n^2) [in other words, O(1n^2)]

Likewise you should say O(1), not O(some other constant)

Tim Goodman
  • The examples are for illustration purposes only, not intended as a formal proof of anything. Technically you could say that something O(n) is also O(5n) and still meet the strict definition, but the *convention* is to say O(n). I think it's better to be consistent. – Tim Goodman Jul 10 '10 at 00:54
  • Anything that's O(5n) is also O(n) *and vice versa*, but not everything that's O(1) is O(0) — only the zero function is. So that convention isn't perfectly relevant here. – ShreevatsaR Jul 10 '10 at 14:06
  • You could equally argue that the empty algorithm grows like 0n^2, and therefore you should say it's O(n^2). Which is true, but equally beside the point. – Stewart Jul 12 '10 at 22:22

There is no such thing as O(0). Even an oracle machine or a hypercomputer requires time for at least one operation, e.g. solve(the_goldbach_conjecture), ergo:

All machines, theoretical or real, finite or infinite, produce algorithms with a minimum time complexity of O(1).

But then again, this code right here is O(0):

// Hello world!

:)

David Titarenco
  • Unless somebody discovers a way to transmit information faster than the speed of light, O(1) will be the lower boundary of any operation. – Steve Zelaznik Sep 08 '15 at 02:40

I would say it's O(1) by definition, but O(0) if you want to get technical about it: since O(k₁·g(n)) is equivalent to O(k₂·g(n)) for any constants k₁ and k₂, it follows that O(1·1) is equivalent to O(0·1), and therefore O(0) is equivalent to O(1).

However, the empty algorithm is not like, for example, the identity function, whose definition is something like "return your input". The empty algorithm is more like an empty statement, or whatever happens between two statements. Its definition is "do absolutely nothing with your input", presumably without even the implied overhead of simply having input.

Consequently, the complexity of the empty algorithm is unique in that O(0) has a complexity of zero times whatever function strikes your fancy, or simply zero. It follows that since the whole business is so wacky, and since O(0) doesn't already mean something useful, and since it's slightly ridiculous to even discuss such things, a reasonable special case for O(0) is something like this:

The complexity of the empty algorithm is O(0) in time and space. An algorithm with time complexity O(0) is equivalent to the empty algorithm.

So there you go.

Jon Purdy

Given the formal definition of Big O:

Let f(x) and g(x) be two functions defined over the set of real numbers. Then, we write:

f(x) = O(g(x)) as x approaches infinity iff there exist a real M and a real x0 such that:

|f(x)| <= M * |g(x)| for every x > x0

As I see it, if we substitute g(x) = 0 (in order to have a program with complexity O(0)), we must have:

|f(x)| <= 0, for every x > x0 (the constraint of existence of a real M and x0 is practically lifted here)

which can only be true when f(x) = 0.

So I would say that not only is the empty program O(0), but it is the only one for which that holds. Intuitively, this should be true, since O(1) encompasses all algorithms that require a constant number of steps regardless of the size of their input, including 0. It's essentially useless to talk about O(0); whatever is in it is already in O(1). I suspect it's purely for simplicity of definition that we use O(1), where it could just as well be O(c) or something similar.
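
A small numeric sketch of that uniqueness claim (the helper bounded_by, the bound M, and the sampled range are my own illustrative choices; finite sampling only approximates "for every x > x0"):

def bounded_by(f, g, M=10**6, x0=10, xs=range(11, 10_000)):
    # Finite-sample check of "|f(x)| <= M * |g(x)| for every x > x0".
    return all(abs(f(x)) <= M * abs(g(x)) for x in xs if x > x0)

print(bounded_by(lambda x: 0, lambda x: 0))  # True: the zero function is O(0)
print(bounded_by(lambda x: 1, lambda x: 0))  # False: 1 <= M*0 fails for any M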

Michael Foukarakis

0 = O(f) for every function f, since 0 ≤ |f|; in particular, it is also O(0).
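
Spelled out against the |f(x)| ≤ M·|g(x)| definition in the answer above: for any f and any M ≥ 1, |0| = 0 ≤ M·|f(x)| holds for every x, so the zero function is in O(f) for every f; taking f = 0 gives 0 ∈ O(0).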

Alexandre C.

No. By convention, it's O(c) whenever you don't have dependence on input size, where c is any positive constant (typically 1 is used: O(1) = O(12.37)).

Mau
  • I *hate* when people start a new sentence with a "Where" that should have been part of the previous sentence. – Andreas Rejbrand Jul 09 '10 at 17:28
  • O(O(0)) is not even a meaningful construct. O(0) is a set of functions, not a function. – Adam Crume Jul 09 '10 at 17:55
  • Except that the convention applies when there is a strictly positive time required to perform the function. That's not the case when absolutely no time is required for the algorithm, as is the case when there's nothing to do (i.e. the empty algorithm). – andand Jul 12 '10 at 15:40
  • The fact is, there can't be an empty program (though there can be an empty algorithm). – Mau Jul 12 '10 at 15:44

Not only is this a perfectly sensible question, but it is important in certain situations involving amortized analysis, especially when "cost" means something other than "time" (for example, "atomic instructions").

Let's say there is a data structure featuring multiple operation types, for which an amortized analysis is being conducted. It could well happen that one type of operation can always be funded fully using "coins" deposited during previous operations.

There is a simple example of this: the "multipop stack" described in Cormen, Leiserson, Rivest, Stein [CLRS09, 17.2, p. 457], and also on Wikipedia. Each time an item is pushed, a coin is put on the item, for a total amortized cost of 2. When (multi)pops occur, they can be fully paid for by taking one coin from each item popped, so the amortized cost of MULTIPOP(k) is O(0). To wit:

Note that the amortized cost of MULTIPOP is a constant (0) ... Moreover, we can also charge MULTIPOP operations nothing. To pop the first plate, we take the dollar of credit off the plate and use it to pay the actual cost of a POP operation. To pop a second plate, we again have a dollar of credit on the plate to pay for the POP operation, and so on. Thus, we have always charged enough up front to pay for MULTIPOP operations. In other words, since each plate on the stack has 1 dollar of credit on it, and the stack always has a nonnegative number of plates, we have ensured that the amount of credit is always nonnegative.

Thus O(0) is an important "complexity class" for certain amortized operations.
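
Here is a minimal Python sketch of that accounting scheme (the class name and the explicit credit counter are my own, following the coin metaphor from CLRS; this models the costs rather than measuring them):

class MultipopStack:
    def __init__(self):
        self.items = []
        self.credit = 0  # banked "coins": one per item currently on the stack

    def push(self, x):
        # Amortized charge 2: one coin pays for the push itself,
        # one is left on the item to pay for its eventual pop.
        self.items.append(x)
        self.credit += 1

    def multipop(self, k):
        # Amortized charge 0: each real pop is paid for by the coin
        # banked when that item was pushed.
        while self.items and k > 0:
            self.items.pop()
            self.credit -= 1
            k -= 1

s = MultipopStack()
for i in range(5):
    s.push(i)
s.multipop(3)
assert s.credit == len(s.items) == 2  # the credit never goes negative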

BD107
  • Side note: unwinding definitions, f(x) ∈ O(0) if and only if there exists some X for which f(x) ≤ 0 so long as x ≥ X. This is a perfectly reasonable definition. – BD107 Jun 16 '20 at 22:55

O(1) means the algorithm's time complexity is always constant.

Let's say we have this algorithm (in C):

int doSomething(int n[])
{
  int x = n[0]; // This line accesses an array element, so it takes time.
  int y = n[1]; // Same here.
  return x + y;
}

I am ignoring the fact that the array could have fewer than 2 elements, just to keep it simple.

If we count the 2 most expensive lines, we have a total time of 2.

2 = O(1), because:

2 <= c * 1, if c = 2, for every n > 1

If we have this code:

void doNothing(void) {}

and count it as having 0 expensive lines, there is no difference between saying it has O(0), O(1), or O(1000) complexity, because for every one of these functions, we can prove the same theorem.

Normally, if the algorithm takes a constant number of steps to complete, we say it has O(1) time complexity.

I guess this is just a convention, because you could use any constant number to represent the function inside the O().

mverardo
  • The OP noted that nobody disputes that the empty algorithm is O(1). He wants to know whether it is also O(0). – andand Jul 09 '10 at 19:53