
I have started reading about the C++ STL and found a book on it. While reading about complexity, which plays a major role in choosing algorithms and data structures, I noticed that only Big-O notation is used, with different arguments (O(n), O(log n), …). Searching further, I found that f(x) = O(g(x)) (Big-O) means that the growth rate of f(x) is asymptotically less than or equal to the growth rate of g(x).

So my question is: if an algorithm's time complexity is always equal to the growth of g(x), why do we write that complexity as f(x) = O(n) (Big-O of n) rather than using Θ (Theta)? From what I read, f(x) = Θ(g(x)) (Theta) means that the growth rate of f(x) is asymptotically equal to the growth rate of g(x).

Shouldn't the notation here be Θ instead of O(n)? Or is there a reason for using Big-O?

And what notation should we use to measure space complexity? I don't see anything about space complexity regarding the STL in that book.

reference: What is the difference between Θ(n) and O(n)?

RaGa__M

3 Answers


why do we write that complexity as f(x) = O(n) (Big-O of n) rather than using Θ (Theta)

Theta might be useful for describing how a particular algorithm behaves. But the STL - or the C++ standard library in general - isn't a single implementation, so there is no single behaviour to describe.

The description of the STL is a set of requirements on how the algorithms chosen by an implementation must behave. The complexities are part of those requirements. It makes no sense to require that the complexity of an algorithm must be *at least* something; only the upper limit is relevant. Therefore, Big-O is used.

And what notation should we use to measure space complexity?

Big-O notation can be used for space complexity as well.

eerorika
  • I agree with the point that the "upper limit is relevant", but based on the link I mentioned, Big-O means [the growth rate of f(x) is asymptotically less than or equal to the growth rate of g(x)]. Looking at the link, if we need an "upper limit", shouldn't we go with Omega [(Big-Omega) means that the growth rate of f(x) is asymptotically greater than or equal to the growth rate of g(x)]? – RaGa__M Feb 02 '16 at 14:38
  • @RichardGeorge no, Omega would be specifically a lower bound. If the growth rate of f(x) is asymptotically greater than or equal to that of g(x), then g(x) cannot possibly be an upper limit for f(x). – eerorika Feb 02 '16 at 14:42
  • Do we achieve the upper bound in both the O(n^2) and the O(n * log(n)) case? – RaGa__M Feb 02 '16 at 14:49
  • What do you mean by "*achieve upper bound*"? – eerorika Feb 02 '16 at 15:02
  • I meant g(x) - is g(x) an upper limit in the case from my comment above? – RaGa__M Feb 02 '16 at 15:42
  • @RichardGeorge No, O(g(x)) is the upper limit. – eerorika Feb 02 '16 at 17:44

So my question is: if an algorithm's time complexity is always equal to the growth of g(x), why...

This is not true. For instance, the complexity of sort is Big-O of n*log(n), but it may come down to linear in some cases. It is rare that we can give a Theta bound for an algorithm.

Also, the standard usually gives Big-O guarantees for its functions; I haven't seen any Theta restrictions there.

And what notation should we use to measure space complexity?

Big-O is a notation for function growth and can be used for both space and time complexity.

Ivaylo Strandjev
  • Hi Ivaylo, were you saying that no algorithm could always be equal to g(x)? The thing is, we have O(n) [the runtime grows linearly (by the same factor) as the number of elements grows]; in this case, why can't we use Θ(g(x))? – RaGa__M Feb 02 '16 at 14:25
  • Of course I am not saying that no algorithm can have a Theta bound. I am saying it is a rare case. I am also saying that the standard does not use Theta; instead it uses Big-O. – Ivaylo Strandjev Feb 02 '16 at 14:27
  • So it's all about standards? – RaGa__M Feb 02 '16 at 14:28
  • @RichardGeorge I make a few points in my answer and this is one of them – Ivaylo Strandjev Feb 02 '16 at 14:29

One reason is that if an implementer came up with an algorithm that was (for example) linear rather than O(n log n), they would not be allowed to use it if the function were specified as Θ(n log n).