20

If the above constraint is given in a programming question and I solve it using recursion, am I violating the constraint? It could be, because recursion also uses the stack. Is that right?

Robin Green
Jainab Bano
    Technically, if you have proper [tail-call optimisation](http://en.wikipedia.org/wiki/Tail_call), then you can have a recursive solution to a problem, which only needs O(1) space on the stack, independent of the input. The recursion is then basically nothing but a loop. `Scheme`, for example, mandates tail-call elimination for, ah, tail-calls in all conforming implementations... – Dirk Jun 23 '14 at 08:34
  • 1
    This is exactly why quicksort isn't truly an in-place algorithm (if, by in-place, you mean `O(1)` space complexity...). – Bakuriu Jun 23 '14 at 08:36
  • 1
    @Bakuriu But quicksort can be programmed as a loop, and can be made `O(1)` space complexity, though it may lose in time complexity. See http://stackoverflow.com/questions/9096787/is-imperative-quicksort-in-situ-or-not – a06e Jun 23 '14 at 22:24

6 Answers

27

If the depth of the stack (recursion) is constant and does not change with respect to the size of the input, then a recursive solution can be O(1) extra space.

Some compilers may do tail call optimization (TCO) and remove recursive calls if they are the last statement executed in any given code path through a function. With TCO, there is no call-stack related memory overhead.

However, keep in mind that the O(1) constraint may have been imposed to force you to choose a particular (probably non-recursive) algorithm, so relying on tail call optimisation may be unwise even if you know the compiler you are using has made the relevant transformation to your code. At the very least, if you rely on it, you should say so explicitly and justify your expectation that TCO will occur by reference to language specifications, compiler documentation and/or disassembly, as appropriate.
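To make the contrast concrete, here is a minimal Python sketch (the function names are illustrative, not from the question; CPython performs no tail-call optimisation, so the recursive version really does consume one stack frame per call):

```python
def sum_recursive(n):
    # Each call adds a stack frame, so this uses O(n) extra space.
    if n == 0:
        return 0
    return n + sum_recursive(n - 1)

def sum_iterative(n):
    # Two local variables regardless of n: O(1) extra space.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print(sum_recursive(100))  # 5050
print(sum_iterative(100))  # 5050
```

With CPython's default recursion limit, `sum_recursive(10**6)` raises `RecursionError`, while `sum_iterative(10**6)` completes using the same fixed handful of local variables.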

al45tair
perreal
  • 17
    That is an unusual kind of recursion, though. Usually it goes deeper as the input grows. – Thilo Jun 23 '14 at 07:34
  • 9
    @Thilo An example is an operator precedence-based parser, where `parse(9)` calls `parse(8)` which calls `parse(7)` etc. for each precedence level of operators. –  Jun 23 '14 at 07:35
  • Is there any non-contrived *recursion* that does only a constant number of calls? The closest-to-useful example is `if (something_off_about(parameter)) return f(fix(parameter));`, and I'm not sure (1) whether this is actually helpful (just re-assigning the parameters would do the trick) and (2) whether it's even considered recursion –  Jun 23 '14 at 07:37
  • 1
    @delnan There is, see my comment directly before yours. :) –  Jun 23 '14 at 07:37
  • 4
    @delnan, a search algo for a game may have a fixed number of calls if total number of moves is bounded. – perreal Jun 23 '14 at 07:39
  • @perreal For most interesting games, the search tree is so huge that even though it is technically constant, asymptotic complexity bounds are still applicable. In addition, when turning such a game into a computational challenge, especially when also adding complexity constraints, we generalize the game so that the size of the board, or some other parameter that affects the search tree, is variable and the `n` with respect to which we analyze complexity. –  Jun 23 '14 at 07:42
  • @hvd Yeah, seen that a bit too late and didn't want to delete my comment. I'm not sure whether it's a *good* example but it's certainly better than mine. –  Jun 23 '14 at 07:43
  • 2
    @delnan It's something I've really seen used in production code, at any rate. As for perreal's example, if a game always looks exactly (e.g.) four moves ahead, it can still be implemented with fixed-depth recursion. –  Jun 23 '14 at 07:48
  • 7
    Tail recursion can also lead to no stack space being used for languages supporting tail call elimination. – Tarik Jun 23 '14 at 08:33
  • @Tarik though you can only rely on TCO in certain languages (e.g. Scheme). C compilers, for example, can't always do TCO. Plus if I’d set a question like this one and you submitted something relying on TCO, I’d want an explanation that you were expecting tail call optimisation, that you knew what that meant and that you were using an appropriate language/implementation that did it. – al45tair Jun 23 '14 at 11:32
  • @alastair "though you can only rely on TCO in certain languages" — I already mentioned that. As for the definition of tail recursion and when it applies, there exist sites such as Wikipedia lying around on the Internet. – Tarik Jun 23 '14 at 14:13
  • @Tarik Again, if I set a question like this one, and your answer relied on TCO, I’d want to make sure you understood it — and I don’t just mean copying words from Wikipedia. I might, equally, be inclined to explicitly ban use of tail calls to answer the question (since the point is likely to pick a particular non-recursive algorithm). – al45tair Jun 23 '14 at 15:40
  • @alastair "I might, equally, be inclined to explicitly ban use of tail calls to answer the question" Go ahead, that's the strongest argument you made so far. By the way, I got too lazy to lookup TCO acronym on the Internet. You might be inclined to clarify :-) – Tarik Jun 23 '14 at 20:15
  • @perreal: Please add a note to your answer regarding [tail recursion](http://stackoverflow.com/questions/33923/what-is-tail-recursion). – einpoklum Jun 24 '14 at 10:24
  • 2
    @einpoklum, tail recursion is very much language dependent so it can't be used for general claims. I think we can just say tail recursion is not recursion at runtime. – perreal Jun 24 '14 at 10:27
  • @perreal: But if we're analyzing the code's complexity, we can ignore whether or not the compiler actually implements tail recursion properly, and make the general claim. Anyway, what you said would make a good enough 'fine print' comment for your answer. – einpoklum Jun 24 '14 at 11:48
26

extra allowed space is O(1)

means that your program can use only a constant amount of space, say C.

Going by the definition of big-O, this means that the space that your program needs cannot depend on the size of the input, although C can be made arbitrarily large.

So if the recursion depends on the input (a) (which is usually the case), the space that your program needs is not O(1).

To clarify further:

  • A program which always uses 1 MB uses O(1) space.

  • A program which always uses 1 TB is still using O(1) space (b).

  • A program which uses N MB, where N is part of the input, does not use O(1) space (it uses O(N) space).

Long story short, whenever you read O(1), just mentally replace it with constant.


(a) For example, with foo(n) = foo(n - 1), the stack space needed to maintain the function calls is O(n).

(b) When material on O notation comments on how the ignored constants can be troublesome, this is what they are talking about.
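The footnote's foo(n) = foo(n - 1) example can be observed directly; a small Python sketch (the `depth` counter is added here purely for illustration) that counts its own call depth:

```python
def foo(n, depth=1):
    # Mirrors foo(n) = foo(n - 1), but also counts stack frames:
    # the call depth (and hence stack space) grows linearly with n.
    if n == 0:
        return depth
    return foo(n - 1, depth + 1)

print(foo(10))   # 11 frames for n = 10
print(foo(500))  # 501 frames for n = 500: O(n) space, not O(1)
```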

axiom
  • 4
  • I very much like the comment on the 1 TB constant. I remember long ago asking a lecturer who taught us about the notation, "Why can we not just precompute every possible input (up to the program's max input size) into a table? Then every problem could be O(1)," for which at the time I got no answer. Or, in the case of time, why not just sleep for as long as the worst case takes, to ensure constant time? – Vality Jun 23 '14 at 12:13
  • @Vality: what was the lecturer's answer? I'm guessing it was that for theoretical purposes, there is no max input size; and for practical purposes, 'constant time' is not useful at the expense of speed. – LarsH Jun 23 '14 at 13:43
  • 2
    @LarsH The lecturer at the time, I believe, said that that was a limitation of the system, but then referred to Turing machines, where the numbers could be arbitrarily large. He did however note my concern and said that we cannot blindly apply the notation without looking at practice and limitations. – Vality Jun 23 '14 at 15:00
  • There's no need to sleep; any program that always runs in less than x seconds is O(1), since there's a constant that it's always smaller than. Technically speaking, on finite real-world machines, all terminating problems are O(1), since it takes a finite amount of time to cycle through all states. (Well, all problems, since the computer will break eventually.) That's just not a helpful way to approach algorithmic analysis. – prosfilaes Jun 23 '14 at 21:49
  • Let's consider linear search algorithm as an example. Is it O(1) space? Suppose that the length of the input array is n. In order to index all elements in the array, the index variable needs O(lg n) bits. By definition, this algorithm is not O(1) space. Is it true? – Yu-Han Lyu Jun 23 '14 at 22:59
  • @nomen: If we have a computer with 256 bytes of memory running one instruction per second, then the computer can transition through no more than 256 ^ 256 states. Either it will return a result in those 256 ^ 256 steps, or it will not terminate. So there's our constant; for all n, calculating Ackermann (n, n) will take time less than or equal to 256 ^ 256 seconds if it can be done at all, and since that's a constant, this is O(1). – prosfilaes Jun 23 '14 at 23:29
  • @nomen Did you read my answer? Because I think I addressed that. A given computer has a constant amount of memory, and thus any algorithm will use at most that constant amount of memory or fail. Since memory usage is bounded above by a constant for real machines, it's O(1). – prosfilaes Jun 24 '14 at 01:12
  • 4
    This should probably go to chat, but for the record, Big-O is used to show how resource usage grows compared with input size. You can't just say "it always takes less than 10 years (or TB), therefore it's constant," because that doesn't tell you what happens as the input grows. Finitude is not the issue -- even if the problem itself dictated that the input size was bounded by some constant, you'd still want to know how certain algorithms performed as input size grew *up to that constant.* – johncip Jun 24 '14 at 03:04
  • @johncip Landau notation doesn't tell you what happens as the input grows; it tells you what happens as the input goes to infinity. If the algorithm grows slowly for mathematically small n (say, less than 10^10) and then turns exponential, this notation will tell you it's exponential. Robert Sedgewick in his lectures complains about people using algorithms with better Big-O that will be faster only on problems bigger than every computer on Earth together could solve. – prosfilaes Jun 24 '14 at 07:49
  • @nomen If you can talk about Big-O of an algorithm running on a Turing Machine with an oracle, you can talk about the Big-O of an algorithm running on a finite state machine that closely models real life system. – prosfilaes Jun 24 '14 at 07:50
  • @nomen nomen's definition, not necessarily the one actually used by computer scientists. Cf. Turing machines with oracles. – prosfilaes Jun 24 '14 at 19:30
  • @nomen That link speaks of functions, not algorithms, not computers. No algorithm can take more than O(1) space on a real computer, because there will exist some n such that, for all problem sizes larger than that n, the algorithm does not complete, and while not complete it stays below a constant amount of memory. If you measure the memory usage of a failed algorithm, then it will be O(1); if you don't, it won't have a Big O, since you can't take a limit to infinity on a function that isn't defined for sufficiently large numbers. – prosfilaes Jun 25 '14 at 00:00
  • @nomen One who cannot explain the basics does not truly understand them. My original claim was not that this is how the notation is used in practice, which I think is pretty clear from my original post. Again, it is a variation of a point that Sedgewick makes, that "as n goes to infinity" can obscure what happens when n goes to values that you can actually solve on a computer. – prosfilaes Jul 11 '14 at 07:32
  • @nomen The amount of memory used by an algorithm on a given computer is a function. That function is O(1), because it is bounded above. Furthermore, algorithms have a finite number of steps, and thus there are countably many of them; therefore they cannot be the same as functions from real numbers to real numbers, which are not countable in any sense. – prosfilaes Jul 11 '14 at 19:18
  • @prosfilaes: you're just embarrassing yourself now. For your own sake, please just stop. I'm deleting all of my comments and I suggest you do the same, and get a good book on recursive function theory. (Hint: there are countably many recursive functions. A basic and trivial property) – nomen Jul 12 '14 at 02:58
8

If the depth of your recursion grows depending on the size of your input (which it usually does), then yes: You would be using an unbounded amount of stack memory. The requirement was to solve the problem with a fixed amount of memory.

Thilo
7

As the other answers point out, the amount of stack you use must be O(1): it must remain constant whatever the size of the input. If you wish to solve the problem in a recursive manner, that leaves you with only two possibilities:

  • Fixed-depth recursion, which means limiting the number of times the function recurses.
  • Tail recursion, which means that the recursive call must be the last thing the function evaluates, so as to trigger TCO (tail call optimization). Roughly speaking, since the recursive call is the last thing happening in the function's execution, instead of pushing a new call context onto the stack, the existing call context is overwritten by the new one, effectively using a constant amount of stack space.
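As an illustration of the second possibility, here is a Python sketch (the function names are hypothetical; note that CPython itself performs no TCO, which is exactly why other comments here stress that tail recursion is language dependent — in a TCO language such as Scheme the first function would run in O(1) stack):

```python
def fact_tail(n, acc=1):
    # Tail-recursive: the recursive call is the last thing evaluated.
    # A TCO-capable implementation reuses the current stack frame,
    # giving O(1) stack; CPython does not, so here it is still O(n).
    if n <= 1:
        return acc
    return fact_tail(n - 1, acc * n)

def fact_loop(n):
    # The loop a TCO compiler would effectively produce: O(1) stack.
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc

print(fact_tail(10))  # 3628800
print(fact_loop(10))  # 3628800
```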
Laurent LA RIZZA
  • 1
    It's also possible for algorithms to be "for all practical purposes" O(1) even if the stack depth is not strictly constant. A few data structures, for example, would require O(lg(lg(N))) stack space, and might require a collection's items to outnumber all the electrons in the universe before it reached a stack depth of ten. Such a thing wouldn't technically be O(1), but one could treat it as though it were. – supercat Jun 23 '14 at 22:00
2

If you are using recursion to solve this problem then you are using the stack to pass data down the recursion tree. In this regard you are typically using more than O(1) space.

I do agree with the accepted answer, but I want to point out that if you are using a language with tail call optimization (like Clojure) then you can solve problems in O(1) space that would use more space in a language without this feature (like Java).

So the right answer also depends on the language being used.

Adam Arold
  • 2
    While you can manually eliminate tail calls with `recur` there is no tail call optimization in clojure due to a limitation with the JVM. [Related Answer](http://stackoverflow.com/questions/19462314/why-cant-tail-calls-be-optimized-in-jvm-based-lisps) – Andrew T Jun 23 '14 at 20:31
0

A storage complexity of O(1) simply means your algorithm must use a constant amount of storage, i.e. it must use the same amount of storage on a set of 10 elements as it does on 1000.

You should probably use iteration to accomplish this.

David Carpenter