I am having difficulty understanding how to find the exact runtime of a function when only given an input size and the Big-O notation for the function. Could someone explain how to do the following example problem?

An algorithm takes 0.5 ms for input size 100. How long will it take for input size 500 if the running time is the following (assume low-order terms are negligible)?
a. linear
b. O(N logN)
c. quadratic
d. cubic

  • Big O measures worst case so there's really no way to know. Many algorithms have optimizations that run faster or slower depending on what the input is, not just how big it is. Other than that, this is pretty simple. – Garr Godfrey Mar 31 '21 at 22:09
  • So how would I determine how long an input of size 500 would take in milliseconds? – bpreiss12 Mar 31 '21 at 22:12
  • @bpreiss12 The asker probably intends for you to come up with a function for each one. Put differently, if `f(100)` is 0.5 ms, what is `f(500)` in each case? Linear, for example, would take the form `f = aN`. Solving for `a` would get you your equation. – General Grievance Apr 02 '21 at 15:36
  • The asker has massively misunderstood Big O if they are asking for an "exact runtime". If they had asked for an *approximate* runtime, it would be a much more sensible question. – Caleth Apr 02 '21 at 15:43

1 Answer

In short, you treat the relative execution time as a function of the relative input size, and you choose that function based on the algorithm's assumed complexity.

Specifically, a linear algorithm's execution time will scale as a linear function of relative input size, a quadratic algorithm's execution time scales quadratically with input size, etc.

In this example, you know the algorithm takes 0.5 ms on input size 100. To estimate how it performs on input size 500, first ask yourself: by what factor did the input size change? f = new_size/old_size = 500/100 = 5, so the input is 5 times larger.

Next, estimate the new execution time by scaling the original time according to the assumed complexity. Denote the initial time t_0 and the input scale factor f. For polynomial complexities the time scales as a power of f alone; for N log N the scale factor by itself is not enough, because the logarithmic factor depends on the actual sizes, so you scale by (N_new log N_new)/(N_old log N_old). Conveniently, the base of the logarithm cancels in that ratio. Evaluating each case:

Linear:       T = f * t_0                        = 5 * 0.5 ms        = 2.5 ms
Linearithmic: T = f * (log(500)/log(100)) * t_0  = 5 * 1.35 * 0.5 ms ≈ 3.37 ms (any log base)
Quadratic:    T = (f^2) * t_0                    = 25 * 0.5 ms       = 12.5 ms
Cubic:        T = (f^3) * t_0                    = 125 * 0.5 ms      = 62.5 ms

You can generalize this approach to estimate an algorithm's execution time on a new input size from its assumed complexity and a single measured data point: compute the scale factor f and plug the known time in for t_0, remembering to use the actual input sizes (not just f) whenever a logarithmic factor is involved.
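If it helps to see this as code, here is a minimal sketch in Python; `estimate_runtime` and its parameter names are just illustrative, not part of any standard library:

```python
import math

def estimate_runtime(t0_ms, n_old, n_new, complexity):
    """Estimate the runtime on n_new, given a measured time t0_ms on n_old,
    assuming low-order terms are negligible."""
    f = n_new / n_old  # relative change in input size
    if complexity == "linear":
        return t0_ms * f
    if complexity == "nlogn":
        # The log base cancels in the ratio, so natural log is fine here.
        return t0_ms * (n_new * math.log(n_new)) / (n_old * math.log(n_old))
    if complexity == "quadratic":
        return t0_ms * f ** 2
    if complexity == "cubic":
        return t0_ms * f ** 3
    raise ValueError(f"unknown complexity: {complexity}")

for c in ["linear", "nlogn", "quadratic", "cubic"]:
    print(c, round(estimate_runtime(0.5, 100, 500, c), 2), "ms")
```

Running this reproduces the numbers above: 2.5, 3.37, 12.5, and 62.5 ms.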
