73

I just came across this weird discovery: in normal maths, n*log(n) would be less than n, because log n is usually less than 1. So why is O(n*log(n)) greater than O(n)? (i.e. why is n*log(n) considered to take more time than n?)

Does Big-O follow a different system?

Ritveak

11 Answers

165

It turned out I had misunderstood log n to be less than 1. When I asked a few of my seniors, I learned today that if the value of n is large (which it usually is when we are considering Big O, i.e. the worst case), log n can be greater than 1.

So yeah, O(1) < O(log n) < O(n) < O(n log n) holds true.
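
A quick numeric check of this (a minimal Python sketch of my own; base 2 and the sample values of n are arbitrary choices) shows log n passing 1 as soon as n goes past 2:

```python
import math

# log2(n) equals 1 at n = 2 and keeps growing from there,
# so n*log2(n) overtakes n for every n > 2.
for n in [2, 16, 1024, 1_000_000]:
    print(f"n={n:>9}  log2(n)={math.log2(n):>6.2f}  n*log2(n)={n * math.log2(n):>14,.0f}")
```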

(I thought this was a dumb question and was about to delete it, but then realised that no question is a dumb question, and there might be others with the same confusion, so I left it here.)

Ritveak
30

...because log n is usually less than 1.

This is a faulty premise. In fact, log_b(n) > 1 for all n > b. For example, log_2(32) = 5.

Colloquially, you can think of log n as roughly the number of digits in n (in base 10). If n is an 8-digit number then log n ≈ 8. Logarithms are bigger than 1 for most values of n, because most numbers have multiple digits.
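
To see the digit analogy in action, here is a small Python check (my own illustration; it uses base 10, which is what the digit count corresponds to):

```python
import math

# The number of decimal digits in n is floor(log10(n)) + 1,
# so log10(n) tracks the digit count to within 1.
for n in [7, 42, 12345678]:
    print(n, "-> digits:", len(str(n)), " log10(n):", round(math.log10(n), 2))
```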

John Kugelman
27

Here is a graph of the popular time complexities:

O(n*log(n)) is clearly greater than O(n) for n>2 (log base 2)



An easy way to remember might be, taking two examples

  1. Imagine the binary search algorithm, which has O(log(N)) time complexity.
  2. Suppose that, for each step of the binary search, you had to iterate over an array of N elements.
  3. The time complexity of that task would be O(N*log(N)).
  4. That is more work than iterating over the array once, which is O(N). (A sketch of this comparison follows below.)
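
To make the comparison concrete, here is a minimal Python sketch (the function names and the array size of 1024 are my own choices for illustration) that counts the steps taken by a single scan versus a scan repeated once per binary-search-style halving:

```python
def scan_once(arr):
    """O(n): touch every element of the array exactly once."""
    steps = 0
    for _ in arr:
        steps += 1
    return steps

def scan_per_halving(arr):
    """O(n*log n): repeat a full scan once per binary-search-style halving."""
    steps = 0
    span = len(arr)
    while span > 1:       # about log2(n) halvings, as in binary search
        for _ in arr:     # each halving pays a full O(n) scan
            steps += 1
        span //= 2
    return steps

arr = list(range(1024))
print(scan_once(arr))         # 1024  -> n
print(scan_per_halving(arr))  # 10240 -> n * log2(n), since log2(1024) = 10
```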


Jude Niroshan
Caleb Hillary
22

Plot both graphs (on Desmos (https://www.desmos.com/calculator) or any other site) and see for yourself the result for large values of n (y = f(n)). I say you should look at large values because for small values of n the program will not have a time issue. For convenience I have attached a graph below; you can try other bases for the log.

The red curve represents time = n and the blue curve represents time = n*log(n).
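
If you would rather plot locally than on Desmos, a minimal matplotlib sketch (my own; base 2 and the range of n are arbitrary choices) reproduces the same picture:

```python
import numpy as np
import matplotlib.pyplot as plt

n = np.linspace(2, 1000, 500)
plt.plot(n, n, color="red", label="time = n")                        # red: n
plt.plot(n, n * np.log2(n), color="blue", label="time = n*log2(n)")  # blue: n log n
plt.xlabel("n")
plt.ylabel("time")
plt.legend()
plt.show()
```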

randomUser
6

In computers, it's log base 2 and not base 10. So log(2) is 1, and log(n) for n > 2 is greater than 1. Only for n < 2 is the value less than 1 (log(1) = 0).

Amit
5

log_b(n) can be greater than 1 if n is greater than the base b. But this alone doesn't answer your question of why O(n*log n) is greater than O(n).

Usually the base is less than 4. So for higher values of n, n*log(n) becomes greater than n. And that is why O(n*log(n)) > O(n).

  • "Usually the base is always less than 4." The base is neither usually nor always less than 4. In an [octree]https://en.wikipedia.org/wiki/Octree), for example,the base is 8. I'll grant that the most common data structures you'll work with in Computer Science classes will have a base of 4 or less, but the real world is different. In any case, the base is usually much smaller than the number of items you're considering when you talk about algorithmic analysis, so you can assume that log(n) will be greater than 1, regardless of the base. – Jim Mischel Jun 08 '19 at 22:43
4

This graph may help. n*log(n) rises faster than n, and log(n) is greater than 1 for n greater than the logarithm's base. https://stackoverflow.com/a/7830804/11617347

sxpg
2

No matter how two functions behave for small values of n, they are compared against each other when n is large enough. Theoretically, there is an N such that for each given n > N, n*log(n) >= n. If you choose N = 10, n*log(n) is always greater than n beyond that point.
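
You can check that claim directly with a small Python sketch (the test range is my own choice, and for log base 2 the inequality in fact already holds from n = 2 onward):

```python
import math

# Verify n*log2(n) >= n for all n in a test range above the claimed N = 10.
N = 10
assert all(n * math.log2(n) >= n for n in range(N + 1, 100_000))
print("n*log2(n) >= n for every tested n >", N)
```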

agtabesh
1

The assertion is not always accurate for small inputs. For small values of n, the growth rate of (n^2) is less than that of (n) and (log n), so over that range an (n^2) algorithm can take less time than a (log n) one. But as n increases, (n^2) grows dramatically, whereas (log n) grows more slowly than both (n^2) and (n), so for large n the (log n) algorithm is more efficient.

0

For higher values of n, log n becomes greater than 1. Since Big O is concerned with large values of n, we can say that log n is greater than 1 most of the time. Hence O(n log n) > O(n) (assuming large values of n).

0

Remember "big O" is not about values, it is about the shape of the function I can have an O(n2) function that runs faster, even for values of n over a million than a O(1) function...

David V. Corbin