I can't get my head around how to answer this question. I know how to find time complexity, but I just can't make sense of this.
- See [this related entry](https://stackoverflow.com/q/24202687/849891), which makes use of the [empirical orders of growth](http://en.wikipedia.org/wiki/Analysis_of_algorithms#Empirical_orders_of_growth) technique. The short answer to _your_ question is: it's impossible to tell exactly and for sure. – Will Ness Nov 21 '21 at 08:34
2 Answers
Time complexity expresses how the time a particular algorithm needs to process its input grows as the size of that input grows.
This means that while algorithm `A`, with linear time complexity, may take 1 day to finish processing `n == 1000`, algorithm `B`, also with linear time complexity, may take one week to process the same data, in both quantity and kind. But once you know that algorithm `A` is linear and takes 1 day to finish `n == 1000`, you know that it takes roughly 10 days for `A` to finish `n == 10000`.
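Here is a minimal sketch of that extrapolation (the `process` function is just a hypothetical stand-in for whatever linear-time algorithm you are measuring):

```python
import time

def process(data):
    # placeholder linear-time work: a single pass over the input
    total = 0
    for x in data:
        total += x
    return total

n_small = 1_000
t0 = time.perf_counter()
process(list(range(n_small)))
t_small = time.perf_counter() - t0

# For a linear algorithm, 10x the input should take roughly 10x the time.
print(f"measured at n={n_small}: {t_small:.6f}s")
print(f"predicted at n={10 * n_small}: {10 * t_small:.6f}s")
```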
Now, you know that some algorithm with linear complexity takes one day. You are given a completely different algorithm (it must be different, because its time complexity is different) with complexity `n log n` and are asked how long it will take. You cannot know: you don't know the constants that are in play. So it would be wrong to make assumptions based on a linear algorithm that you have tested.
If you want to estimate the time a linearithmic algorithm takes, you need to test that algorithm and base your estimate on measurements of that same algorithm. That's because, and I want to emphasize this, time complexity relates the growth of an algorithm's input to the growth of that same algorithm's running time. The relation is between this specific algorithm's times and its input.
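As a sketch of "test that algorithm itself": estimate the hidden constant of an `n log n` algorithm from a small run of it, then extrapolate. Here `sorted` (Timsort, which is O(n log n)) stands in for whatever linearithmic algorithm you actually have:

```python
import math
import random
import time

n_small = 100_000
data = [random.random() for _ in range(n_small)]

t0 = time.perf_counter()
sorted(data)
t_small = time.perf_counter() - t0

# This constant belongs to THIS algorithm only; a different
# linearithmic algorithm would yield a different constant.
c = t_small / (n_small * math.log2(n_small))

n_big = 10_000_000
print(f"predicted time at n={n_big:,}: {c * n_big * math.log2(n_big):.3f}s")
```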

- If we knew that the linearithmic algo had processed in one day X% of the data the linear one had processed in a day, _then_ we could hazard a tentative prediction. And if we knew Y% for an hour of work, say, then we could make a prediction with pretty high certainty. – Will Ness Nov 21 '21 at 08:41
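To make that comment concrete, here is a toy calculation. All numbers are invented, and it assumes that finishing a prefix of `k` items costs roughly `c * k * log k`, which is not true of every linearithmic algorithm:

```python
import math

m = 1_000_000         # hypothetical total input size
fraction_done = 0.25  # X%: share of the data finished after one day (measured)
elapsed_days = 1.0

# Assumption: progress through a prefix of k items costs about c * k * log k.
k = fraction_done * m
c = elapsed_days / (k * math.log2(k))

# Extrapolate from the partial run to the full input.
print(f"estimated total: {c * m * math.log2(m):.2f} days")
```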
Let `m` be the length of the data. The O(N) algorithm takes `C × m` = 1 day to treat the data, `C` being a constant equal to 1 day / `m` in this case.
Now, the O(N log N) algorithm should take `C' × m log m` to treat the data. However, the constant `C'` in this case is not necessarily equal to the constant `C` in the first case.
Therefore, you can't tell how long the O(N log N) algorithm will take to treat the same data. It could be less, equal, or more time. However, the larger the data the better the second algorithm will perform.
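To see why nothing can be concluded without the second constant, here is a toy comparison; both constants are invented purely for illustration:

```python
import math

m = 10**6
c_linear = 1.0 / m  # calibrated so the O(N) algorithm takes exactly 1 "day"

# Depending on the unknown constant C', the O(N log N) algorithm
# can finish in less, about the same, or more time than the O(N) one.
for c_prime in (c_linear / 100, c_linear / 20, c_linear):
    days = c_prime * m * math.log2(m)
    print(f"C' = {c_prime:.1e}  ->  {days:.2f} days")
```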

- Not only `C`; the `N` is also unknown. Also, the larger the `N`, the _worse_ the second algo will be compared to the first, _in the context of this question_. (I know you meant something different, but that is not what this question is asking about.) – Will Ness Nov 21 '21 at 08:42
- @WillNess I meant that, assuming we know the size of "some data" as stated in the question, we are missing the value of `C` for the `N log N` algorithm. – Tarik Nov 21 '21 at 14:15
- No, we don't know `N` for a known piece of data, because `N` depends on the algorithm as well: all we are told is that this algorithm is `O(N)`, and we don't know the ratio between data size and `N`. What I mean, specifically, is [empirical orders of growth](http://en.wikipedia.org/wiki/Analysis_of_algorithms#Empirical_orders_of_growth), which lets us approximate the `N log N` as `n ^ (1.0 + a)`, where `a > 0` depends on `N`'s value -- the bigger the `N`, the closer `a` is to 0.0. And we don't know where we are on the `N` range. – Will Ness Nov 21 '21 at 15:05
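A minimal sketch of the empirical-orders-of-growth idea from that link: run the algorithm at two sizes and estimate the exponent `b` in `t ~ n^b` as `log(t2/t1) / log(n2/n1)`. Here `sorted` is a stand-in for any `N log N` workload:

```python
import math
import random
import time

def measure(n):
    data = [random.random() for _ in range(n)]
    t0 = time.perf_counter()
    sorted(data)  # stand-in for any N log N workload
    return time.perf_counter() - t0

n1, n2 = 200_000, 800_000
t1, t2 = measure(n1), measure(n2)

# For an N log N algorithm, b comes out as 1 + a, with a small
# positive a that shrinks as n grows.
b = math.log(t2 / t1) / math.log(n2 / n1)
print(f"empirical order of growth: n^{b:.2f}")
```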