1

In the question here, many posters keep referring to larger inputs when talking about increasing n, and some of them therefore say that an algorithm with O(1/n) complexity cannot exist. Now, let's say we have an algorithm where we specify that we want a list of size n, obtained by removing elements from an existing list of size m (therefore removing m - n elements, with n <= m obviously). Clearly, increasing n leads to reduced computation time, as a larger n implies fewer operations.
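To make the example concrete, here is a minimal Python sketch (the helper name `shrink_to_size` is made up for illustration) in which the work done is proportional to m - n, the number of elements removed, so a larger n means fewer operations:

```python
def shrink_to_size(lst, n):
    """Remove elements from the end of lst until only n remain.

    Each iteration removes one element, so the total work is proportional
    to m - n, where m = len(lst): the larger n is, the fewer operations
    are performed.
    """
    while len(lst) > n:
        lst.pop()  # removing from the end of a Python list is O(1)
    return lst


print(shrink_to_size(list(range(10)), 7))  # removes m - n = 3 elements
```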

So does n then have to refer to the physical size of an input (like the size of the input list in a sorting algorithm), or can it be defined more abstractly, as it is here?

I know that big O notation is defined for asymptotic behaviour, but what if we have an algorithm where a parameter n is clearly bounded from above, as it is here? Can we just say we assume n < m?

I know the example above is very silly, but I actually have a case (too complicated to post here, hence the silly example, though it has some parallels to it) where computation time clearly decreases as n increases, n is bounded from above, and defining n differently so that it isn't bounded from above is not possible.

Thanks in advance

Konrad
  • If you have a list of size `m` (finite) and you have to remove the first `n`, then its complexity is O(1) (correct me if I'm wrong) – R. Schifini Mar 27 '16 at 00:13
  • I'm not so sure about that: I guess it depends on the type of language in that case (for example, in some languages a new list is created from the old list, so in that case it will have O(_m_ - _n_) complexity; but in other languages (where no copy is made) I guess the list can just get "cut off" at index _n_, in which case it will have O(1) complexity). However, I am talking about a more general case where the _n_ elements do not have to be contiguous, so then at least _n_ operations have to be performed to identify and remove the _n_ elements (not to mention the shifting of the other elements). – Konrad Mar 27 '16 at 00:18
  • Just because you called a value `n` doesn't mean that it's the value that controls the complexity of the algorithm. The complexity is given in terms of the input to the algorithm, which in this case appears to be `m-n`. You're trying to define the complexity of your algorithm by the number of elements *not* in your input. – beaker Mar 27 '16 at 15:30
  • @beaker but what if we really would like to give information as to how the computation time of the algorithm varies with respect to _n_ (not _m_ - _n_)? – Konrad Mar 27 '16 at 20:15
  • That is a statistic, that is not the time complexity of the algorithm. – beaker Mar 27 '16 at 20:15
  • @beaker so you are saying that complexity of an algorithm can _only_ be defined in terms of the size of a _physical_ input, not in terms of some other abstract quantity (which will (in my case) be a parameter to the algorithm)? – Konrad Mar 27 '16 at 20:17
  • I make no claims about "physical" or "abstract" quantities, because I have no idea what those mean to you. Time complexity is defined as a function of the number of elements in the input. – beaker Mar 27 '16 at 20:25

2 Answers

1

In the case of your example you would say that the running time of the algorithm is, say, O(m-n) (or perhaps O(log(m-n)) or O((m-n)^2), depending on how each item is removed), so that you can actually convey the complexity in a useful manner; there is probably a way to reframe your problem to do this. The entire point of the notation is to concisely express the scalability of an algorithm. I expect you can find some function of the various parameters of the problem that expresses the 'difficulty' of the input (e.g. m-n), which may be something more abstract than size, and then a function of that difficulty (e.g. x^2) that converts the difficulty into space or time requirements.
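As a rough illustration of measuring scalability against such a 'difficulty' value rather than against n directly, the sketch below (the `shrink` helper and the concrete sizes are assumptions for the demo, not part of the original problem) times the removal step for a fixed m and several values of n, so the trend against m - n is visible:

```python
import timeit

def shrink(lst, n):
    """Pop elements until only n remain: roughly m - n operations."""
    while len(lst) > n:
        lst.pop()

m = 100_000
for n in (90_000, 50_000, 10_000):
    # The list is rebuilt on every run, which adds a constant O(m) cost;
    # the differences between rows still track m - n.
    t = timeit.timeit(lambda: shrink(list(range(m)), n), number=20)
    print(f"n = {n:>6}   m - n = {m - n:>6}   time = {t:.3f} s")
```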

Alex Hall
  • In the real case (the "complicated" problem I was referring to in my question) the complexity is very clearly O(1/n). I was wondering, since there is this thing about big O being defined asymptotically, whether O(1/n) actually implies O(1), since 1/n -> 0 as n -> infinity (which is not really useful to me). Is it appropriate to use big O in this case, in this manner then? I hear you when you mention that the complexity should be "conveyed in a useful manner" (exactly my intention!), but would it not be _strictly_ more correct to say then that computation time is _proportional_ to 1/n? – Konrad Mar 27 '16 at 00:36
1

I'll use conventional notation to restate your example. We want a list of size k, obtained by removing elements from an existing list of size n. In this example, k is different from n, because n is the input size while k is just a parameter.

Depending on the implementation, this problem can be solved in either O(k) or O(n-k). With an O(n-k) implementation, increasing the input size still results in more work. With an O(k) implementation, the complexity of your algorithm depends only on a parameter, not on the input size. Either way, increasing the input size does not decrease the complexity of the algorithm.
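A minimal sketch of the two variants in this notation (the function names are made up for illustration):

```python
def take_k(lst, k):
    """O(k): copy the first k elements into a new list."""
    return [lst[i] for i in range(k)]

def drop_to_k(lst, k):
    """O(n - k): remove elements in place until only k remain."""
    while len(lst) > k:
        lst.pop()  # popping from the end is O(1)
    return lst

n, k = 10, 4
print(take_k(list(range(n)), k))     # k copy operations
print(drop_to_k(list(range(n)), k))  # n - k pop operations
```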

Yes, some algorithms have complexity that depends only on a parameter k and not on the input size n, given that we know what the parameter k is (or that it can be computed in trivial time). This is widely used in solving NP-complete problems; see Parameterized complexity.

But I still don't know of any algorithm that is O(1/n) where n is the size of the input.

HenryLee
  • I have a data preprocessing algorithm that _clearly_ has O(1/n) complexity with regard to some parameter (the plot of the time taken to reach the solution against the parameter _n_ clearly resembles 1/n). It is only one of the algorithms I am testing, though: some of the other algorithms I am testing increase in computation time as _n_ increases. But, as you mentioned, it depends on the implementation. – Konrad Mar 27 '16 at 00:44
  • And by the way, assume that _n_ (in your notation) remains constant. The only parameter that is of interest here is _k_. – Konrad Mar 27 '16 at 00:44
  • @KonradKapp Does the n in your data preprocessing algorithm correspond to the input size? – HenryLee Mar 27 '16 at 00:49
  • I did not keep to your notation in my first comment, sorry: let _n_ be input size, and _k_ the other parameter. I cannot edit my first comment anymore to fix this. I am assuming that I keep input size _n_ fixed, so that I may determine the effect of _k_ on computation time. – Konrad Mar 27 '16 at 00:53
  • By the way, I misread your last sentence: "...where _n_ is the _size of the input_". I think we understand each other now :) – Konrad Mar 27 '16 at 00:54
  • @KonradKapp Yes, you could find that kind of algorithm. – HenryLee Mar 27 '16 at 00:55
  • Can you think of an algorithm having complexity O(k / n)? – Ayush Mishra Mar 27 '16 at 01:42
  • @AyushMishra Nope, I can't. I believe that if O(k/n) exists, O(1/n) will also exist, assuming that k is independent of n. – HenryLee Mar 27 '16 at 04:57
  • @AyushMishra I agree with HenryLee, but I firmly believe that algorithms with complexity O(n/k) (as opposed to O(k/n)) exist, since we just discussed that O(1/k) complexity can indeed exist. – Konrad Mar 27 '16 at 20:13
  • Agreed. I have another example for O(n/k). Let the problem be printing all multiples of 18 (say k) up to a given input n. Clearly the complexity of the best algorithm would be O(n/k) (where k is 18). Is this example valid in the above context, or does the k here qualify as a parametric variable? Or is it just another input variable? – Ayush Mishra Mar 28 '16 at 02:13
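A minimal sketch of the multiples example from the last comment (the function name and the concrete values are just for illustration): the loop body runs about n / k times, so for a fixed n a larger k means fewer iterations.

```python
def print_multiples(k, n):
    """Print every multiple of k up to n: roughly n / k iterations."""
    m = k
    while m <= n:
        print(m)
        m += k

print_multiples(18, 200)  # prints 18, 36, ..., 198 -- about n / k lines
```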