How is computational complexity defined if the algorithm:
- ...yields many results? Is the estimate a total (then an algorithm producing a set of k elements cannot be faster than O(k)) or per element (then the estimate must be multiplied by k to compare it with algorithms that do not produce sets)?
- What about storage complexity: does the estimate reflect whether the entire set must be held in memory at once, or whether each consecutive element can be produced and then discarded (see the sketch after this list)?
- ...has multiple parameters? Is there a separate estimate for each parameter, or some combined expression?
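
To make the storage question concrete, here is a minimal Python sketch (my own illustration, not taken from any of the linked questions) that selects k of N integers without repetition in two ways: one returns the whole set at once, the other yields elements one by one via selection sampling, so the caller can discard each element after use.

```python
import random

def sample_as_list(n, k):
    # Materialises all k results before returning: the output alone
    # occupies Theta(k) memory, however the caller consumes it.
    return random.sample(range(n), k)

def sample_as_stream(n, k):
    # Selection sampling: yields one result at a time, so beyond a couple
    # of counters no extra memory is needed for the output itself --
    # but the loop still scans all n candidates.
    remaining, needed = n, k
    for value in range(n):
        if random.random() * remaining < needed:
            yield value
            needed -= 1
        remaining -= 1
```

Both produce a uniform k-subset, so should their space complexity be quoted as O(k) or as O(1) plus the output?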
An example that fits both cases is selecting k elements out of N. E.g., does the estimate differ depending on whether ~k or ~N steps are required?
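
For instance (again my own sketch of textbook algorithms, not something from the linked questions), Robert Floyd's sampling algorithm needs only ~k iterations and O(k) working memory, while the "shuffle a prefix of an explicit pool" approach needs ~N time and memory just to set the pool up; the ambiguity is how their complexities should be quoted so they remain comparable.

```python
import random

def sample_floyd(n, k):
    # Robert Floyd's algorithm: ~k iterations and O(k) working memory,
    # independent of n.
    chosen = set()
    for j in range(n - k, n):
        t = random.randrange(j + 1)          # uniform draw from 0..j
        chosen.add(j if t in chosen else t)  # on collision take j itself
    return chosen

def sample_shuffle_prefix(n, k):
    # Partial Fisher-Yates over an explicit pool: only k swaps, but
    # building the pool already costs ~n time and ~n memory.
    pool = list(range(n))
    for i in range(k):
        j = random.randrange(i, n)
        pool[i], pool[j] = pool[j], pool[i]
    return pool[:k]
```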
I'd like to see some hard evidence rather than just random thoughts or personal experience: the formal definition of the term for these cases, and/or how these ambiguities are resolved in CS papers. The goal is to work out a complete, state-of-the-art convention that eliminates these ambiguities in my (and others') texts once and for all.
The questions that had me puzzled about this are:
- Unique (non-repeating) random numbers in O(1)?
- How do you efficiently generate a list of K non-repeating integers between 0 and an upper bound N
- Algorithm to select a single, random combination of values?
- Efficiently selecting a set of random elements from a linked list