Possible Duplicate:
Plain English explanation of Big O

The Wikipedia article on logarithmic growth is a stub. Many of the answers I read on Stack Overflow explain how efficient a process or function is in terms of a logarithmic function using 0 (I assume [see below] it is a 0 [zero] and not an O [the letter, as in M, N, O, P, Q], but please correct my assumption if it is wrong) and an n or N.

Can someone explain these logarithmic notations from the common computational explanations more concretely? Maybe in terms of time in seconds (milliseconds are also welcome; I'm just trying to conceptualize it as real-life time differences), in terms of size, and/or in terms of weight?

I have seen the following frequently: (please feel free to include other ones as well)

  • O(1)
  • O(N)

My assumption is based on the fact that a 0 [zero] outside of a code block does not have a slash through it, while inside a code block a 0 does have a slash through it.

chris Frisina

1 Answer

What this means is that the execution time (or other resource) is some function of the amount of data. Let's say it takes you 5 minutes to blow up 10 balloons. If the function is O(1), then blowing up 50 balloons also takes 5 minutes. If it is O(n), then blowing up 50 balloons will take 25 minutes.
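To make the balloon example concrete, here is a minimal sketch (the numbers are taken from the analogy above: 10 balloons take 5 minutes; the function names are just illustrative):

```python
def time_constant(n_balloons, base_minutes=5):
    # O(1): total time is independent of input size
    return base_minutes

def time_linear(n_balloons, minutes_per_10=5):
    # O(n): time grows in direct proportion to input size
    return minutes_per_10 * (n_balloons / 10)

print(time_constant(50))  # 5  -- same as for 10 balloons
print(time_linear(50))    # 25.0 -- five times the data, five times the time
```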

O(log n) means that as things scale up, managing larger n becomes relatively easier (it follows logarithmic growth). Let's say you want to find an address in a directory of n items. Suppose it takes 6.64 seconds to search a directory of 100 entries. [6.64 = log_2(100)] Then it might only take 7.64 seconds to search 200 entries. That is logarithmic growth: doubling the input adds only a constant amount of work.
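A classic O(log n) algorithm is binary search, which halves the remaining range at every step. This sketch counts comparisons to show that doubling the directory from 100 to 200 entries adds roughly one step (the step-counting wrapper is my own illustration, not from the answer):

```python
def binary_search_steps(sorted_entries, target):
    # Binary search halves the remaining search range each iteration,
    # so the number of comparisons grows like log2(n).
    lo, hi = 0, len(sorted_entries) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_entries[mid] == target:
            return steps
        elif sorted_entries[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps  # target absent; step count is still ~log2(n)

print(binary_search_steps(list(range(100)), 99))   # 7
print(binary_search_steps(list(range(200)), 199))  # 8 -- double the data, one extra step
```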

Kevin A. Naudé