
The definition in a book says:

The "Big-Oh" Notation

Let f(n) and g(n) be functions mapping nonnegative integers to real numbers. We say that f(n) is O(g(n)) if there is a real constant c > 0 and a real constant n0 ≥ 1 such that

f(n) ≤ c g(n), for n ≥ n0.

I'm not able to understand the terminology used in the formula and the definition. Can somebody explain it in plain English?

shakthydoss
  • Which part of the definition don't you understand? Words, phrasings? Do you know what a function is in the mathematical sense? – phant0m Jul 31 '13 at 08:31

3 Answers


Basically, if f(n) is O(g(n)), then g(n), up to a constant factor, is an upper bound on f(n): in the worst case, f never grows faster than g.

For example, binary search is O(log n) (or O(ln n), which is equivalent). Why?

(Binary search works like this: take the middle element, and compare to the target. If it's the one, you're done. If it's bigger than the target, throw out the second half of the list and repeat on the first half; if it's smaller than the target, throw out the first half and repeat the search on the second half.)
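
In code, that procedure looks roughly like this (a minimal Python sketch; the function name and the list-based interface are just illustrative):

def binary_search(items, target):
    # items must be sorted in ascending order
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # take the middle element
        if items[mid] == target:   # it's the one: done
            return mid
        elif items[mid] > target:  # too big: throw out the second half
            hi = mid - 1
        else:                      # too small: throw out the first half
            lo = mid + 1
    return -1                      # not in the list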

Because, in the worst case, you need 2 operations to find something in a list that is 3 elements long; 3 operations when it's 7 elements long; 4 if it is 15 elements long. Thus, when the number of elements n is 2^x - 1 for some x, the number of operations is x; turn it around, and you'd say that for n elements the number of operations is log_2(n + 1), which is roughly log_2 n. Now say that each operation takes 2 seconds (imagine you're comparing things by hand); then the worst time to search is log_2 n * 2 seconds. log_2 n can be rewritten as ln n / ln 2, so the formula becomes:

worst search time(n) = (ln n / ln 2) * 2 seconds
                     = (2 seconds / ln 2) * ln n

Now, 2 seconds / ln 2 is a constant; let's call it c. Let's call the worst search time for n elements f(n), and let's call ln n g(n).

We said before that if n = 3, f(3) <= c * ln 3 (because c * ln 3 is the worst search time, and the real search time is always less than or equal to that; after all, we could always find the target on our first try). If n = 7, f(7) <= c * ln 7; and so on.

The bit about n0 is just a guard: the complexity we calculate for small n might be a deviation, an anomaly, an exception to the rule, but if we go with big enough data (i.e. n >= n0), the rule becomes obvious and inviolate. In this case the rule works pretty much from the start, but some algorithms have extra costs that throw off the calculation on small inputs.
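
If you want to see the operation count concretely, here is a quick sanity check in Python (a sketch; comparisons_needed is a made-up helper that forces binary search down its worst-case path):

import math

def comparisons_needed(n):
    # worst-case comparisons for binary search over n elements:
    # never find the target early, keep halving until the range is empty
    count, lo, hi = 0, 0, n - 1
    while lo <= hi:
        count += 1
        lo = (lo + hi) // 2 + 1    # discard the middle and everything before it
    return count

for x in range(1, 5):
    n = 2**x - 1
    print(n, comparisons_needed(n), math.log2(n + 1))
# prints: 1 1 1.0 / 3 2 2.0 / 7 3 3.0 / 15 4 4.0 -- the count tracks log_2 n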

Amadan

Translation to "plain English": imagine that f(n) and g(n) are functions that take a nonnegative integer as input and give a real number as output (no imaginary numbers).

Big-Oh allows us to compare two functions to see if one is bounded by the other. For example, an exponential function f(n) would not be bounded by a linear function g(n), so f(n) would not be O(g(n)).

We can say that f(n) is O(g(n)) if the following holds: f(n) ≤ c * g(n) for all n ≥ n0. If there is some way to satisfy this inequality by plugging in values for c and n0, then f(n) is O(g(n)).

For example (same as above), let f(n) = 2^n, g(n) = n. Is the following satisfiable: 2^n ≤ c * n for n ≥ n0? The answer is no. No matter what value you plug in for c, the left side eventually becomes bigger than the right side as n grows. There is no way to make the left side smaller than the right side for all values n ≥ n0.

On the other hand, if f(n) = 2n, g(n) = n, then the condition is 2n ≤ c * n for n ≥ n0. This is satisfiable: c = 2, n0 = 1 (the definition above requires n0 ≥ 1, and 2n ≤ 2n holds for every n).
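
A quick numeric illustration of both cases (a sketch; the constant c = 1000 is an arbitrary choice):

c = 1000
for n in [1, 10, 20, 30]:
    print(n, 2**n <= c * n)   # True, True, False, False: 2^n outgrows c*n
for n in [1, 10, 20, 30]:
    print(n, 2*n <= 2 * n)    # always True: c = 2 witnesses 2n in O(n)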

isaach1000
  • `"in the same set of functions"` - not sure what this means, but it sounds wrong. – Bernhard Barker Jul 31 '13 at 09:13
  • If `f(n)` is exponential and `g(n)` is linear, `f(n)` is indeed not in `O(g(n))`, but `g(n)` is in `O(f(n))`. So your explanation is a bit misleading - if it was just about whether they're different, then certainly the relation would be symmetric, no? – sepp2k Jul 31 '13 at 12:43
  • Agreed. I will edit this to be more precise. I was really focusing on the common use of big-Oh to classify algorithms as similar. – isaach1000 Jul 31 '13 at 12:54

Let f(n) and g(n) be functions mapping nonnegative integers to real numbers.

Let f(n) and g(n) be functions whose domain (the values n can take) is the nonnegative integers, i.e. 0, 1, 2, ...; the values of f(n) and g(n) for those inputs may be any real numbers.

We say that f(n) is O(g(n)) if there is a real constant c > 0 and a real constant n0 ≥ 1 such that:

f(n) ≤ c g(n), for n ≥ n0.

f(n) = O(g(n)) if there exist positive constants c and n0 such that 0 <= f(n) <= c * g(n) for all n >= n0. In other words, it means that f(n) is asymptotically less than or equal to g(n), up to a constant factor.


For example, consider f(n) = 3 * n^2 + 5. We can show that f(n) is O(n^2) by choosing c = 4 and n0 = 3. The inequality 3 * n^2 + 5 <= 4 * n^2 simplifies to n^2 >= 5, which is true for every integer n >= 3:

3 * n^2 + 5 <= 4 * n^2

f(n) is not O(n), because whatever constant c and value n0 you choose, I can always find a value of n greater than n0 so that 3 * n^2 + 5 is greater than c * n.
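
You can check the chosen constants mechanically (a quick Python sketch; the helper names and the range bound are just for illustration):

def f(n): return 3 * n**2 + 5
def g(n): return n**2

c, n0 = 4, 3
print(all(f(n) <= c * g(n) for n in range(n0, 1000)))  # True
print(f(2) <= c * g(2))  # False: 17 > 16, so n0 = 2 would be too small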

AllTooSir