Obviously, your thinking is correct!
Since the algorithm is cubic in the size n, in the worst case the running time grows with n^3, so the speed falls off with the cube of the size increase.
Your assumption is correct: plug n = 900,000 and n = 90 into n^3 and divide the two values. That ratio is the slowing factor.
Here, slowing factor = (900,000)^3 / (90)^3 = (10,000)^3 = 10^12.
Hence the slowing factor is 10^12: your program will slow down by a factor of 10^12, or in other words, its efficiency drops to 10^(-12) of what it was. Quite a drastic change!
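If you want a quick sanity check of that arithmetic, here is a small Python sketch (purely illustrative; the helper names are my own, and it assumes the running time really is dominated by the cubic term). It computes the predicted slowing factor and then demonstrates the cubic scaling empirically on tiny sizes, since n = 900,000 obviously would never finish:

```python
import time

def predicted_slowdown(n_small, n_large, exponent=3):
    """Predicted slowdown for an algorithm whose worst-case running
    time grows like n**exponent (cubic here)."""
    return (n_large / n_small) ** exponent

# The case from the question: going from n = 90 to n = 900,000.
print(predicted_slowdown(90, 900_000))   # 1e12

# Empirical check on tiny sizes with a toy Theta(n^3) triple loop.
def cubic_work(n):
    total = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                total += 1
    return total

def measure(n):
    start = time.perf_counter()
    cubic_work(n)
    return time.perf_counter() - start

t50, t100 = measure(50), measure(100)
print(t100 / t50)   # roughly 2**3 = 8, since n doubled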
EDIT (based on the suggested comments):
But how do I compare an algorithm's performance for different inputs based on just the worst-case time complexity?
As hinted by G.Bach, one of the commenters on this post, the basic idea underlying your question is itself contradictory! To think about a general solution you should be talking about Big-Theta notation instead of Big-O. Big-O is an upper bound, whereas Big-Theta is a tight bound. When people only worry about the worst that can happen, Big-O is sufficient; i.e. it says "it can't get much worse than this". The tighter the bound the better, of course, but a tight bound isn't always easy to compute.
So, under worst-case analysis your question is answered the way both of us have answered it. Now, one narrow reason people use O instead of Ω is to drop the disclaimers about worst or average cases. But for a tighter analysis you will have to consider both O and Ω when framing the question. The same approach can be applied to other sizes of n as well; I leave that for you to figure out, but if you have any doubts, please feel free to comment.
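To make that point concrete, here is a hypothetical sketch (the function and inputs are just an illustration, not your code): both calls below run the same O(n^3) brute-force algorithm, yet their actual running times differ enormously depending on the input, which is exactly why the worst-case bound alone cannot predict the running time for a particular input.

```python
import time

def has_triple_with_sum(values, target):
    """Brute-force search for three elements summing to `target`.
    Worst case (no such triple exists) is Theta(n^3); a lucky input
    can return after the very first triple checked. Both are O(n^3)."""
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if values[i] + values[j] + values[k] == target:
                    return True
    return False

def timed(values, target):
    start = time.perf_counter()
    result = has_triple_with_sum(values, target)
    return result, time.perf_counter() - start

n = 200
lucky   = [1, 2, 3] + [10**9] * (n - 3)   # first triple already sums to 6
unlucky = [10**9] * n                     # no triple sums to 6: full n^3 work

print(timed(lucky, 6))     # finishes almost instantly
print(timed(unlucky, 6))   # actually hits the cubic worst case
```

The first call checks a single triple and returns, while the second checks all of them, so the measured times tell you far more about those particular runs than the shared O(n^3) label does.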
Also, your actual running time is not directly given by the worst-case analysis, but it is related to it, and the question can be reframed this way.
P.S. I have taken some ideas/statements from Big-oh vs big-theta, so thanks to the authors there too!
I hope this helps. Feel free to comment if I skimmed over anything!