I was doing some reading on logarithms and the rate of growth of the running time of algorithms.
I have, however, a problem understanding the Big-Ω (Big-Omega) notation.
I know that we use it for 'asymptotic lower bounds', and that we can express the idea that an algorithm takes at least a certain amount of time.
Consider this example:
```javascript
var a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
```
Somebody chooses a number from it. I write a program that tries to guess this number using a linear search (guessing 1, 2, 3, 4, ... until it finds the chosen number).
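A minimal sketch of what I mean (the helper name `guessNumber` is just my own invention for illustration):

```javascript
// Linear search: scan the array from the front until the chosen
// number is found, and report how many guesses that took.
// (guessNumber is a made-up name, only for this example.)
function guessNumber(arr, chosen) {
  for (var i = 0; i < arr.length; i++) {
    if (arr[i] === chosen) {
      return i + 1; // number of guesses made
    }
  }
  return -1; // the chosen number is not in the array
}
```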
I can say that the running time of the algorithm is a function of the size of its input, so the following are true (where n is the number of elements in the array):
- Θ(n) (Big-Theta notation, an asymptotically tight bound)
- O(n) (Big-O notation, an asymptotic upper bound)
When it comes to Big-Ω, my understanding is that the algorithm's running time would be Ω(1), since one guess is the least number of guesses needed to find the chosen number (for example, if the player chose 1, the first item in the array).
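Counting guesses with the `guessNumber` sketch above (again, only my own illustration) shows the two cases I am thinking of:

```javascript
var a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];

guessNumber(a, 1);  // 1 guess   - the chosen number is the first element (best case)
guessNumber(a, 10); // 10 guesses - the chosen number is the last element (worst case)
```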
I reckon this because of the definition I found on Khan Academy:
> Sometimes, we want to say that an algorithm takes at least a certain amount of time, without providing an upper bound. We use big-Ω notation; that's the Greek letter "omega."
Am I right to say that this algorithm's running time is Ω(1)? Is it also true that it is Ω(n), and if so, why?