I am watching a Berkeley University online lecture and am stuck on the problem below.
Problem: Assume you have a collection of CDs that is already sorted by title. You want to find the list of CDs whose titles start with "Best Of."
Solution: We use binary search to find the first occurrence of "Best Of" and then print titles until the title no longer starts with "Best Of".
Additional question: Find the complexity of this algorithm.
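Here is a minimal sketch of what I understand the solution to be, in Python (the function name `titles_starting_with` and the sample list are just mine for illustration, not from the lecture):

```python
def titles_starting_with(titles, prefix="Best Of"):
    """Return all titles starting with `prefix`, assuming `titles` is sorted."""
    # Binary search for the leftmost title that is >= `prefix`; because the
    # collection is sorted, every title starting with `prefix` sits in one
    # contiguous block that begins at this position.  O(log n) iterations.
    lo, hi = 0, len(titles)
    while lo < hi:
        mid = (lo + hi) // 2
        if titles[mid] < prefix:
            lo = mid + 1
        else:
            hi = mid
    # Linear scan from the first match: O(k) for k matching titles.
    result = []
    i = lo
    while i < len(titles) and titles[i].startswith(prefix):
        result.append(titles[i])
        i += 1
    return result


cds = ["Abbey Road", "Best Of Blur", "Best Of Queen", "Best Of Sade", "Thriller"]
print(titles_starting_with(cds))  # ['Best Of Blur', 'Best Of Queen', 'Best Of Sade']
```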
Upper bound: The upper bound of binary search is O(log n), and once we have found the first match we print, say, k titles, so the overall bound is O(log n + k).
Lower bound: The lower bound of binary search is Omega(1), assuming we are lucky and the first matching title happens to be the middle title. In that case the overall bound is Omega(k).
This is the way I analyzed it.
But in the lecture, the lecturer used best case and worst case. I have two questions about it:
- Why do we need to use best case and worst case? Aren't big-O and Omega already the best and worst cases the algorithm can perform?
His analysis was:

- Worst case: Theta(log n + k)
- Best case: Theta(k)

If I take "worst case" as referring to the input data and having nothing to do with the algorithm itself, then his analysis is right: in the worst case (the matching titles are at the end of the collection, or there is no match at all), both the big-O and the Omega of the search are log n, so the total is Theta(log n + k).
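To convince myself about his best case, I also sketched a purely hypothetical early-exit variant of the search that stops as soon as the middle element is already the first "Best Of" title, and instrumented it to count loop iterations (none of these names or inputs come from the lecture):

```python
def count_search_steps(titles, prefix="Best Of"):
    """Return (index of the first match, number of binary-search iterations)."""
    lo, hi = 0, len(titles) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if titles[mid].startswith(prefix) and (mid == 0 or not titles[mid - 1].startswith(prefix)):
            return mid, steps  # the middle element is already the first match
        if titles[mid] < prefix:
            lo = mid + 1       # first match, if any, must be to the right
        else:
            hi = mid - 1       # first match, if any, must be to the left
    return lo, steps           # no match; lo is where one would be inserted


n = 1 << 16
# Worst case for the search: the matching block sits at the very end.
worst = ["A%06d" % i for i in range(n)] + ["Best Of %06d" % i for i in range(5)]
# Best case for the search: the first match lands exactly in the middle.
best = ["A%06d" % i for i in range(n)] + ["Best Of %06d" % i for i in range(n + 1)]

print(count_search_steps(worst))  # on the order of log2(n) iterations
print(count_search_steps(best))   # 1 iteration: the middle title is already the first match
```

The printing part is Theta(k) in both arrangements, so it is the search that makes the difference between his Theta(log n + k) and Theta(k).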
Assuming you do not use "best case" and "worst case", how do you analyze the algorithm? Is my analysis right?