In the question here, many posters keep referring to larger inputs when talking about increasing n. Because of that, some of them say that an algorithm with O(1/n) complexity cannot exist. Now, let's say we have an algorithm where we specify that we want a list of size n, produced by removing elements from an existing list of size m (so we remove m - n elements, and obviously n <= m). Clearly, increasing n leads to reduced computation time, since a larger n means fewer removal operations.
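To make the silly example concrete, here is a minimal Python sketch (the function name shrink_to_size is made up for illustration; it shrinks the list in place so the only work done is the m - n removals):

    def shrink_to_size(xs, n):
        """Shrink the list xs in place to size n by removing m - n
        elements from the end (assumes n <= len(xs)).

        Each pop from the end is O(1), so the total work is O(m - n):
        the larger n is, the fewer removals are performed.
        """
        while len(xs) > n:
            xs.pop()  # remove one element from the end; O(1)
        return xs

    # With m = 10, asking for n = 8 performs only 2 removals,
    # while n = 2 performs 8: the work decreases as n grows.
    print(shrink_to_size(list(range(10)), 8))  # [0, 1, 2, 3, 4, 5, 6, 7]

For a fixed m, the running time here is O(m - n), which is a decreasing function of n.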
So does n then have to refer to the physical size of an input (like the size of the input list in a sorting algorithm), or can it be defined more abstractly, as it is here?
I know that big O notation is defined for asymptotic behaviour (n going to infinity), but what if we have an algorithm where the parameter n is clearly bounded from above, as it is here? Can we just say that we assume n <= m?
I know the example above is very silly, but I actually have a real case (too complicated to post here, hence the silly example, though it has some parallels to it) where computation time clearly decreases as n increases, n is bounded from above, and there is no way to redefine it so that it is unbounded.
Thanks in advance