
I've read about algorithm run time in some algorithm books, where it's expressed as O(n). For example, the given code would run in O(n) time in the best case and O(n³) in the worst case. What does it mean, and how does one calculate it for their own code? Is it like linear time, and does each predefined library function have its own run time that should be kept in mind before calling it? Thanks...

  • Possible duplicate: http://stackoverflow.com/questions/3255/big-o-how-do-you-calculate-approximate-it – River Aug 04 '12 at 05:19

3 Answers


A Beginner's Guide to Big O Notation might be a good place to start:

http://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/

Also take a look at Wikipedia:

http://en.wikipedia.org/wiki/Big_O_notation

There are several related questions and good answers on Stack Overflow:

What is a plain English explanation of "Big O" notation?

and

Big-O for Eight Year Olds?

– amdn

Shouldn't this be in math?

If you are sorting an already-sorted array with bubble sort, you can check whether a pass along the array swapped anything. If it didn't, everything is in order and we are done.

Then, in the best case you will do O(n) comparisons (n − 1, to be exact); in the worst case (the array is reversed) you will do O(n²) comparisons (n(n − 1)/2, to be exact).
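
To make that concrete, here is a minimal sketch (not from the original answer; the function name and the comparison counter are added purely for illustration) of bubble sort with the early-exit flag described above:

```python
def bubble_sort(a):
    """Bubble sort with an early-exit flag; returns the number of comparisons."""
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:  # no swaps in this pass: the array is already sorted
            break
    return comparisons

print(bubble_sort(list(range(10))))         # already sorted: 9 comparisons   (n - 1)
print(bubble_sort(list(range(10, 0, -1))))  # reversed: 45 comparisons        (n(n - 1)/2)
```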

A more complicated example: let's find the maximum element of an array. Obviously, you will always do n − 1 comparisons, but how many assignments happen on average? The math is more involved; the answer is H(n) − 1, where H(n) is the n-th harmonic number.
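
A rough sketch of that example (the counters and the simulation are hypothetical additions, used only to illustrate the claim): the running maximum is compared against every element, but it is only reassigned when a new maximum appears.

```python
import random

def find_max(a):
    """Scan for the maximum, counting comparisons and assignments to the running best."""
    comparisons = assignments = 0
    best = a[0]
    for x in a[1:]:
        comparisons += 1
        if x > best:
            assignments += 1  # a new maximum was found
            best = x
    return best, comparisons, assignments

# Average the assignment count over many random permutations;
# it should come out close to H(n) - 1 (about 2.60 for n = 20).
n, trials = 20, 10_000
total = sum(find_max(random.sample(range(n), n))[2] for _ in range(trials))
print(total / trials)
```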

Usually it is easy to work out the best and worst cases, but the average case requires a lot of math.

I would suggest you read Knuth, Volume 1. But who would not?

And the formal definition:

f(n) ∈ O(g(n)) means there exist a constant c > 0 and n₀ ∈ ℕ such that for all m > n₀, f(m) ≤ c·g(m).
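
As a small numeric sanity check of that definition (the functions f, g and the witnesses c, n₀ below are made-up examples, not taken from the answer): with f(n) = 3n² + 5n and g(n) = n², the constants c = 4 and n₀ = 5 work.

```python
def f(n):
    return 3 * n * n + 5 * n

def g(n):
    return n * n

c, n0 = 4, 5
# f(m) <= c * g(m) holds for every m past the threshold (sampled here up to 10000),
# so f(n) is in O(g(n)) = O(n^2).
assert all(f(m) <= c * g(m) for m in range(n0 + 1, 10_001))
```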

In any case, you should read about O-notation on Wikipedia.

– KAction

The big-O notation is one kind of asymptotic notation. Asymptotic notation is an idea from mathematics, which describes the behavior of functions "in the limit" - as you approach infinity.

The problem with defining how long an algorithm takes to run is that you usually can't give an answer in milliseconds because it depends on the machine, and you can't give an answer in clock cycles or as an operation count because that would be too specific to particular data to be useful.

The simple way of looking at asymptotic notation is that it discards all the constant factors in a function. Basically, a·n² will always be bigger than b·n if n is sufficiently large (assuming everything is positive). Changing the constant factors a and b doesn't change that - it changes the specific value of n where a·n² becomes bigger, but doesn't change that it happens. So we say that O(n²) is bigger than O(n), and forget about those constants that we probably can't know anyway.
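
To see that concretely, here is a throwaway sketch (the constants a and b are arbitrary, chosen to favour the linear term heavily):

```python
a, b = 1, 1000  # small quadratic constant, large linear constant

# Find the first n where a*n^2 exceeds b*n. It always exists;
# different choices of a and b only move this crossover point.
n = 1
while a * n * n <= b * n:
    n += 1
print(n)  # 1001 - from here on the quadratic term stays ahead
```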

That's useful because the problems with bigger n are usually the ones where things slow down enough that we really care. If n is small enough, the time taken is small, and the gains available from choosing different algorithms are small. When n gets big, choosing a different algorithm can make a huge difference. Also, the problems that happen in the real world are often much bigger than the ones we can easily test against - and often, they keep growing over time (e.g. as databases accumulate more data).

It's a useful mathematical model that abstracts away enough awkward-to-handle detail that useful results can be found, but it's not a perfect system. We don't deal with infinite problems in the real world, and there are plenty of times when problems are small enough that those constants are relevant for real-world performance, and sometimes you just have to time things with a clock.

The MIT OCW Introduction to Algorithms course is very good for this kind of thing. The videos and other materials are available for free, and the course book (not free) is among the best books available for algorithms.