Talking about Big O notation: if one algorithm's time complexity is O(N) and another's is O(2N), which one is faster?
-
There's no such thing as `O(2N)`. – Chris Hayes Sep 11 '14 at 01:43
-
@ChrisHayes If the algo makes two passes over the same input data, isn't it O(2N)? – deepdive Sep 11 '14 at 01:45
-
It's 2 * O(N); you should not put the 2 inside O(..) – Eric Sep 11 '14 at 01:46
-
No, because constant multiples have no meaning in Big O notation. You need to [do some more reading](https://en.wikipedia.org/wiki/Big_O_notation#Multiplication_by_a_constant). – Chris Hayes Sep 11 '14 at 01:46
-
There is actually a lot of misinformation being provided in these comments. O(f(n)) is well defined for f(n) = 2n. It is absolutely valid to write O(2n). However, O(n) is the *canonical* form, where O(n) = O(cn) for any fixed c > 0. Similarly, O(n + log n) is a valid set, even though O(n + log n) = O(n). – Timothy Shields Sep 11 '14 at 01:47
-
@ChrisHayes Hmm. But what about fly bird's comment? Is 2*O(N) different from O(N)? – deepdive Sep 11 '14 at 01:47
-
@TimothyShields It's valid, but it doesn't have the implications that most people would imagine, and it's a poor habit to get into. People always think "an `O(2N)` algorithm will take twice as long as an `O(N)` algorithm", which is not true. – Chris Hayes Sep 11 '14 at 01:49
-
@ChrisHayes Then it's *that* thinking that needs to be addressed. – user2864740 Sep 11 '14 at 01:49
-
@TimothyShields What about O(N) vs 2*O(N)? – deepdive Sep 11 '14 at 01:53
-
@deepdive The notation "2 * O(n)" is not valid. O(f(n)) is defined to be a set of functions, and you don't multiply a set of functions by 2. – Timothy Shields Sep 11 '14 at 01:54
-
@ChrisHayes The problem is that people often think O(f(n)) is a function. It's not - it's a set of functions. To write something like 4n + 5 = O(n) is an abuse of notation. The correct notation is 4n + 5 ∈ O(n). – Timothy Shields Sep 11 '14 at 02:02
-
In theory, O(2N) is defined but it is equivalent to O(N). – mostruash Sep 11 '14 at 02:30
-
@mostruash Can you provide a link to the theory? Most of the commenters here disagree on that notion. – deepdive Sep 11 '14 at 03:25
-
@deepdive Check Timothy Shields' answer. – mostruash Sep 11 '14 at 03:26
-
@TimothyShields I have no problem with defining the product of two sets as the set of all products of elements from the two sets. So, for instance, `O(n) * O(2^n) = O(n 2^n)` makes perfect sense to me. Using an = sign there instead of a subset sign is a minor irritation, but that is customary notation. As Knuth says, "mathematicians customarily use the = sign as they use the word 'is' in English: Aristotle is a man, but a man isn't necessarily Aristotle." – btilly Sep 11 '14 at 05:59
6 Answers
The definition of big O is:
O(f(n)) = { g | there exist N and c > 0 such that g(n) < c * f(n) for all n > N }
In English, O(f(n)) is the set of all functions that have an eventual growth rate less than or equal to that of f.
So O(n) = O(2n). Neither is "faster" than the other in terms of asymptotic complexity. They represent the same growth rates - namely, the "linear" growth rate.
Proof:
O(n) is a subset of O(2n): Let g be a function in O(n). Then there are N and c > 0 such that g(n) < c * n for all n > N. So g(n) < (c / 2) * 2n for all n > N. Thus g is in O(2n).
O(2n) is a subset of O(n): Let g be a function in O(2n). Then there are N and c > 0 such that g(n) < c * 2n for all n > N. So g(n) < 2c * n for all n > N. Thus g is in O(n).
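To tie this back to the question's "two passes" example, here is a minimal Python sketch (hypothetical functions, not from the original question) of a one-pass and a two-pass algorithm; the second does roughly 2n steps, yet both are in O(n) by the definition above:

```python
def one_pass_sum(data):
    """One pass over the input: about n steps, so O(n)."""
    total = 0
    for x in data:  # n iterations
        total += x
    return total

def two_pass_min_max(data):
    """Two passes over the input: about 2n steps -- still O(n),
    because the constant factor 2 is absorbed by the definition above."""
    smallest = min(data)  # first pass: n iterations
    largest = max(data)   # second pass: n iterations
    return smallest, largest
```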
Typically, when people refer to an asymptotic complexity ("big O"), they refer to the canonical forms. For example:
- logarithmic: O(log n)
- linear: O(n)
- linearithmic: O(n log n)
- quadratic: O(n^2)
- exponential: O(c^n) for some fixed c > 1
(Here's a fuller list: Table of common time complexities)
So usually you would write O(n), not O(2n); O(n log n), not O(3 n log n + 15 n + 5 log n).
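As a quick numeric check (an illustration added here, not part of the original answer) of why 3 n log n + 15 n + 5 log n is written as O(n log n): the ratio of the full expression to its dominant term is 3 + 15/log n + 5/n, which tends to the constant 3 as n grows.

```python
import math

# Ratio of 3*n*log(n) + 15*n + 5*log(n) to the dominant term n*log(n);
# it falls toward the constant 3, so the canonical form is O(n log n).
for n in [10, 1_000, 100_000, 10_000_000]:
    t = 3 * n * math.log(n) + 15 * n + 5 * math.log(n)
    print(n, t / (n * math.log(n)))
```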

-
Your formal proof is very good and I understand why O(n) = O(2n) mathematically, but can you please help me understand it logically? Like, how can the functions y = x and y = 2x have the same rate of growth? – Darshan Aug 11 '19 at 17:10
-
@darshan a helpful way is to visualise the time complexity by plotting it on a 2D chart. Imagine your O(N) algorithm starts at 1, then 2, 3, 4, ... N. Now imagine your O(2N) algorithm: it's 2 * N, so it will start at 2, then 4, 6, 8, ... 2N. We can see that the rate of growth is directly proportional to N. It's key to bear in mind we're talking about complexity rather than speed. We want to find out how our algorithm grows - is it in a linear fashion, quadratic, etc.? – JmJ Oct 29 '20 at 21:21
-
When you say `g(n) < (c / 2) * 2n for all n > N`, how do you know that? And why are you dividing c by 2? – David G Mar 02 '22 at 18:22
-
There is another one in this list, which is O(n!). It is the worst performance for an algorithm, according to the Big O graph. – Fabiano Jul 24 '22 at 00:17
-
@JmJ The growth rate is proportional to N, but O(2N) === O(N). 2*N isn't an index that cuts out half the inputs; 2N would mean twice the inputs or twice the work, which still reduces to just linear complexity, O(N). – None Aug 02 '22 at 23:35
-
@Fabiano: There are worse performers than O(n!). "Deciding the truth of a given statement in Presburger arithmetic" takes [2-EXPTIME](https://en.wikipedia.org/wiki/2-EXPTIME), which is slower. There is no "maximum" complexity. – Mooing Duck Jul 18 '23 at 15:09
-
Thanks for that, @MooingDuck. Thanks for introducing me to this one, which I wasn't aware of. :) Best for you! – Fabiano Jul 19 '23 at 00:32
Timothy Shields' answer is absolutely correct: O(n) and O(2n) refer to the same set of functions, and so one is not "faster" than the other. It's important to note, though, that "faster" isn't a great term to apply here.
Wikipedia's article on "Big O notation" uses the term "slower-growing" where you might have used "faster", which is better practice. These algorithms are defined by how they grow as n increases.
One could easily imagine an O(n^2) algorithm that is faster than an O(n) algorithm in practice, particularly when n is small or if the O(n) algorithm requires a complex transformation. The notation indicates that for twice as much input, one can expect the O(n^2) algorithm to take roughly 4 times as long as it did before, whereas the O(n) algorithm would take roughly twice as long as it did before.
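A sketch of that point with made-up step counts (the constants here are invented for illustration, not taken from the answer): a quadratic algorithm with a small constant factor can beat a linear one with a large constant until n gets big enough.

```python
# Invented step counts for illustration: which algorithm does less work?
def quadratic_steps(n):
    return n * n      # O(n^2), small constant factor

def linear_steps(n):
    return 5_000 * n  # O(n), large hidden constant

for n in [10, 1_000, 5_000, 10_000, 100_000]:
    q, l = quadratic_steps(n), linear_steps(n)
    print(n, "quadratic does less work" if q < l else "linear does less work")
```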

It depends on the constants hidden by the asymptotic notation. For example, an algorithm that takes 3n + 5 steps is in the class O(n). So is an algorithm that takes 2 + n/1000 steps. But 2n is less than 3n + 5 and more than 2 + n/1000...

It's a bit like asking if 5 is less than some unspecified number between 1 and 10. It depends on the unspecified number. Just knowing that an algorithm runs in O(n) steps is not enough information to decide if an algorithm that takes 2n steps will complete faster or not.

Actually, it's even worse than that: you're asking if some unspecified number between 1 and 10 is larger than some other unspecified number between 1 and 10. The sets you pick from being the same doesn't mean the numbers you happen to pick will be equal! O(n) and O(2n) are sets of algorithms, and because the definition of Big-O cancels out multiplicative factors they are the same set. Individual members of the sets may be faster or slower than other members, but the sets are the same.
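Comparing the answer's three step counts directly (a small check added here, not part of the original answer): all three functions are in O(n), yet 2n sits strictly between the other two at every n shown.

```python
# The three example step counts from this answer, side by side:
for n in [10, 1_000, 1_000_000]:
    a = 3 * n + 5     # in O(n)
    b = 2 + n / 1000  # also in O(n)
    c = 2 * n         # in O(2n), which is the same set as O(n)
    print(n, b, c, a) # b < c < a at every n, despite identical Big-O
```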

-
Can you explain why the analogy "you're asking if some unspecified number between 1 and 10 is larger than some other unspecified number between 1 and 10" is used but not "you're asking if some unspecified number between 1 and 10 is larger than some other unspecified number between 1 and 10 and multiply that number by 2"? – Mc Kevin Sep 11 '14 at 06:36
-
@McKevin I was trying to simplify the situation instead of being true to the problem. It makes the intuitive leap a bit smaller, and then the rest is easier. – Craig Gidney Sep 11 '14 at 17:58
Theoretically, O(N) and O(2N) are the same.
But practically, an algorithm that does N steps will have a shorter running time than one that does 2N steps, though only by a constant factor; as N grows, both running times grow at the same linear rate.

-
O(2n) is conceptually O(n) because the constant factor stops mattering as n approaches infinity. O(2n) blends implementation details into the conceptual ones; see http://www.perlmonks.org/?node_id=227909. – radtek Apr 28 '15 at 04:54
-
I disagree: when you're talking about the *practical* performance of an algorithm, there is so much more to consider than the algorithmic complexity. One algorithm might jump around the system's memory much more, which would make it much slower; it's not enough to say that a lower complexity will guarantee a shorter running time. – user2662833 Oct 15 '17 at 22:33
-
If we are writing O(2N) vs O(N), I would think we are talking about the practicalities of running the same implementation twice; otherwise, where did the 2 come from? – c z Mar 11 '20 at 10:40
-
@cz One might assume that by stringing together two O(n) operations, the result is O(2n), which at best is a misuse of syntax. Since we don't know the linear, sublinear, or constant factors within those two O(n) operations, it's safer not to make assumptions that an algorithm labeled O(2n) takes twice as long as an algorithm O(n), or indeed that it is practically faster or slower at all. We simply can't use O(f(n)) notation to reason about the absolute or relative runtimes of two O(n) functions except for the theoretical, asymptotic behavior as n approaches infinity. – Jeff Bowman Sep 08 '21 at 22:21
O(N) and O(2N) will show a significant difference in growth for small values of N, but as N increases the linear term dominates the growth and the coefficient 2 becomes insignificant. So we can state the algorithm's complexity as O(N).
Example:
Let's take this function
T(n) = 3n^2 + 8n + 2089
For n = 1 or 2, the constant 2089 seems to be the dominant part of the function, but for larger values of n we can ignore the constant and the 8n term and concentrate on 3n^2, as it contributes the most to the growth. As n increases further, even the coefficient 3 becomes insignificant, and we can say the complexity is O(n^2).
For a detailed explanation, refer here.
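A small check of that claim (added for illustration, not in the original answer): the share of T(n) contributed by each term as n grows.

```python
# Share of T(n) = 3n^2 + 8n + 2089 contributed by each term:
for n in [1, 2, 10, 100, 10_000]:
    quad, lin, const = 3 * n * n, 8 * n, 2089
    total = quad + lin + const
    print(n, round(quad / total, 3), round(lin / total, 3),
          round(const / total, 3))
```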

O(n) is faster; however, you need to understand that when we talk about Big O, we are measuring the complexity of a function/algorithm, not its speed. And we measure this complexity asymptotically.
In layman's terms, when we talk about asymptotic analysis, we take immensely huge values for n. So if you plot the graphs of O(n) and O(2n), the values will stay within some fixed range of each other for any value of n. They are much closer to each other than to the other canonical forms like O(n log n) or O(1), so by convention we approximate the complexity to the canonical form O(n).
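To make that concrete (an added illustration, not part of the original answer): 2n stays within a fixed factor of 2 of n, while a different canonical form like n log n pulls away without bound.

```python
import math

# The ratio of 2n to n is always the constant 2; the ratio of
# n*log2(n) to n grows without bound, which is why O(2n) collapses
# to O(n) but O(n log n) does not.
for n in [16, 1_024, 1_048_576]:
    print(n, (2 * n) / n, (n * math.log2(n)) / n)
```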
