Imagine it's called...
listSum([3, 5, 3, 7])
It gets down to...
return ls[0] + listSum(ls[1:])
...which is evaluated as...
return 3 + listSum([5, 3, 7])
Then listSum([5, 3, 7]) is evaluated as 5 + listSum([3, 7]).
Then listSum([3, 7]) is evaluated as 3 + listSum([7]).
Then listSum([7]) is evaluated as 7 + listSum([]), which is where the if not ls test kicks in and returns 0.
So, the complete thing is then evaluated as...
3 + 5 + 3 + 7 + 0
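Putting the pieces together, the function being traced presumably looks something like this (a reconstructed sketch - the full definition isn't quoted above):

```python
def listSum(ls):
    # trivial case: an empty list sums to 0
    if not ls:
        return 0
    # first element plus the sum of the rest of the list
    return ls[0] + listSum(ls[1:])

print(listSum([3, 5, 3, 7]))  # 18
```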
Notes:

- listSum(ls[1:]) is shortening the list each time, reducing the complexity of the remaining problem, until an invocation of listSum receives an empty list: the trivial case that if not ls: return 0 handles
- recursive solutions are often made up of two parts: a solution for the simplest possible input (or sometimes, a couple of very simple cases), and a rule for taking an arbitrarily complicated input and reducing it to a formula that makes one or more recursive calls, each at least a little bit simpler, always moving towards that simplest possible input
"I've tried using listSum(ls[0:])
" - ls[0:]
returns a copy of ls
- it would ask for the same list to be processed ad infinitum
- it's arguably more intuitive to explicitly handle a list of one element - if len(ls) == 1: return ls[0] - but then if anyone called listSum([]), the code below the if would try to access ls[0] and raise an exception. Handling the empty list makes the function easier to use, if listSum([]) returning 0 is sane for your application; on the other hand, having listSum([]) raise an exception might uncover bugs where lists are unexpectedly empty - you can decide which is more useful to you. Happily, if you do handle the empty-list case, the len(ls) == 1 case then "just works" with the same recursion logic
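For comparison, here's a sketch of the variant with the explicit single-element branch; once the empty list is handled, the len(ls) == 1 branch is redundant but harmless:

```python
def listSum(ls):
    if not ls:         # empty list: return 0 (or raise, if you prefer)
        return 0
    if len(ls) == 1:   # one element: redundant given the branch above,
        return ls[0]   # but shows the "explicit simplest case" style
    return ls[0] + listSum(ls[1:])

print(listSum([]))            # 0
print(listSum([7]))           # 7
print(listSum([3, 5, 3, 7]))  # 18
```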