
While trying to explain recursive algorithms to generate permutations in lexicographic order, I provided this simple algorithm:

def permute(done, remaining):
  if not remaining:
    print(done)
    return

  sorted_rem = sorted(remaining)

  for c in sorted_rem:
    # Move c to the done portion.
    done.append(c)
    remaining.remove(c)

    # Permute the remaining elements.
    permute(done, remaining)

    # Put c back.
    remaining.append(c)
    # Remove c from done.
    del done[-1]

def main():
  permute([], [1, 2, 3, 4])

if __name__ == "__main__":
  main()

First question: It seems a bit wasteful to me, and I wonder what time complexity it really has, given a Pythonic implementation like this.

Note that the optimal time complexity will be O(n * n!), as we need to print n! permutations of size n.
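That count of n! permutations can be sanity-checked with the standard library's itertools.permutations, which yields permutations in lexicographic order when its input is sorted (this check is mine, not part of the question):

```python
import itertools
import math

# All permutations of a sorted 4-element list, in lexicographic order.
perms = list(itertools.permutations([1, 2, 3, 4]))

assert len(perms) == math.factorial(4)  # n! = 24 permutations
assert perms == sorted(perms)           # emitted in lexicographic order
```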

I am guessing that, because of the call to sorted (which I presume is O(n log n) in Python), an additional log n factor will be added (which I suppose is practically negligible for the n this program can handle).

The second part of the question is to optimize it a bit.

Second question: Assuming that we can implement sorted in O(n) time, and append, remove and del[-1] in O(1) time, what would be the resulting time complexity?


2 Answers


I believe there is a proof that the runtime is indeed O(n*n!).

(Inspired by an earlier SO question here: complexity of recursive string permutation function)

We have the following recursion for the time spent, without printing:

T(n) = n*T(n-1) + O(n^2)

Now if U(n) = T(n)/n! then we must have that

U(n) = U(n-1) + O(n^2/n!)

This is a telescoping series.

Thus we get

U(n) = U(1) + 2^2/2! + 3^2/3! + ... + n^2/n!

Using the power series for e^x, and multiplying by x and differentiating a couple of times, we see that 2^2/2! + 3^2/3! + ... + n^2/n! = O(1)

Thus

T(n) = O(n!).

This is the time spent without printing.

Thus the total time with printing is O(n * n!).

This also proves that it does not matter what the running times of sorted etc. are: as long as they are polynomial, this algorithm will be asymptotically optimal.

The constant will probably be bad, and that is what would actually matter when dealing with n*n!.
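As a rough empirical check (my addition, not part of the original answer), we can instrument the question's recursion to count the O(n) work done at each level instead of printing. If T(n) = O(n!), the count divided by n! should approach a constant:

```python
import math

def permute_count(done, remaining, counter):
    # Same recursion as in the question, but counting work instead of printing.
    if not remaining:
        return
    for c in sorted(remaining):
        counter[0] += len(remaining)  # charge O(n) for sorted()/remove() at this level
        done.append(c)
        remaining.remove(c)
        permute_count(done, remaining, counter)
        remaining.append(c)
        del done[-1]

ratios = []
for n in range(1, 8):
    counter = [0]
    permute_count([], list(range(n)), counter)
    ratios.append(counter[0] / math.factorial(n))  # should approach a constant

print(ratios)
```

The ratios grow toward a fixed limit (the telescoping sum above converges), which matches T(n) = O(n!).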


I have no idea of the 'pythonic' way of doing things, but I suppose transforming a sequence (given in an array) into the next lexicographic permutation might be easier than constructing it recursively (especially with multiple deletions from a collection, and sorting). The next permutation can be found in linear time like this:

  • find the descending suffix (linear scan from the end backwards)
  • if there is a symbol just before the suffix (equivalent to testing if the current pos is greater than 0)
    • swap it with the least greater symbol in the suffix (linear scan or bsearch)
  • reverse the suffix (linear)

Below is a simple visualization of the process on five examples. The bar indicates the suffix position.

12345 - 1234|5 - 1235|4 - 1235|4 - 12354
13452 - 134|52 - 135|42 - 135|24 - 13524
35421 - 3|5421 - 4|5321 - 4|1235 - 41235
54312 - 5431|2 - 5432|1 - 5432|1 - 54321
54321 - |54321 - |54321 - |12345 - 12345

Advantages:

  • processing in situ - no additional copies of collections needed (variables done, remaining, sorted_rem)
  • no sorting of collections and no deleting (and compacting after deletion)
  • NO RECURSION with its stack consumption
  • easy access to the result - no need to modify the code to replace print(done) with any other use of done
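The steps above can be sketched in Python as follows (a sketch of mine, mirroring C++'s std::next_permutation; the function name is my choice, not from the answer):

```python
def next_permutation(a):
    """Transform list a into its next lexicographic permutation, in place.

    Returns False (after reversing a back into ascending order) when a was
    already the last permutation; returns True otherwise.
    """
    # 1. Find the descending suffix: scan backwards for the first a[i] < a[i+1].
    i = len(a) - 2
    while i >= 0 and a[i] >= a[i + 1]:
        i -= 1
    if i < 0:
        # The whole sequence is descending: wrap around to the first permutation.
        a.reverse()
        return False
    # 2. Swap a[i] with the least element of the suffix that is greater than it.
    j = len(a) - 1
    while a[j] <= a[i]:
        j -= 1
    a[i], a[j] = a[j], a[i]
    # 3. Reverse the (still descending) suffix to make it ascending.
    a[i + 1:] = reversed(a[i + 1:])
    return True

# Usage: enumerate all permutations of [1, 2, 3] in lexicographic order.
a = [1, 2, 3]
perms = [tuple(a)]
while next_permutation(a):
    perms.append(tuple(a))
```

Each call does at most two linear scans and one reversal, so one permutation costs O(n) and enumerating all of them costs O(n * n!), with no recursion and no extra copies.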
CiaPan