
I am trying to write code for this problem:

There is a robot on an m x n grid. The robot is initially located at the top-left corner (i.e., grid[0][0]). The robot tries to move to the bottom-right corner (i.e., grid[m - 1][n - 1]). The robot can only move either down or right at any point in time. Given the two integers m and n, return the number of possible unique paths that the robot can take to reach the bottom-right corner.

I have successfully created the code for it and I am now in the process of optimizing it to run faster.

My first method is:

gridTraveler_memo = {}

def gridTraveler(m, n):
    # Only compute a result if this (m, n) pair has not been cached yet.
    if (m, n) not in gridTraveler_memo:
        if m == 1 and n == 1:
            return 1
        elif m == 0 or n == 0:
            return 0
        else:
            gridTraveler_memo[m, n] = gridTraveler(m - 1, n) + gridTraveler(m, n - 1)
    return gridTraveler_memo[m, n]

print(gridTraveler(18,18))

My second method uses functools.cache (using the answer from here):

import functools

@functools.cache
def gridTraveler(m, n):
    if m == 1 and n == 1:
        return 1
    elif m == 0 or n == 0:
        return 0
    else:
        return gridTraveler(m - 1, n) + gridTraveler(m, n - 1)

print(gridTraveler(18,18))

My second set of code runs a lot faster than my first set of code. However, I don't fully understand the functools.cache function, as I am new to Python.
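
From what I can tell from the documentation, functools.cache is equivalent to lru_cache(maxsize=None): it stores results in an internal dictionary keyed by the call arguments, much like my gridTraveler_memo. The decorated function also exposes cache_info(), which lets you see what has been stored, for example:

# Assumes the @functools.cache version of gridTraveler above has already been called.
print(gridTraveler.cache_info())
# Prints something like CacheInfo(hits=..., misses=..., maxsize=None, currsize=...);
# currsize is the number of distinct (m, n) results currently held in the cache.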

With my limited experience, it seems that the first set of code is the better practice for the future. For example, when working on a big project, using "functools.cache" means you are using more memory. Am I correct in assuming this?

  • If you're concerned about memory utilisation then you could revert to lru_cache and specify a limit (see the sketch after these comments) – DarkKnight Jul 08 '22 at 10:16
  • `functools.cache` stores values in a `dict` very similar to your implementation; it will use slightly more memory, but the difference won't be debilitating. IMO, using tools from the std library is very much preferred to implementing something yourself. Readability counts, especially when working on a large project or team; when you implement something yourself it increases the time taken to read and understand your code – Iain Shelvington Jul 08 '22 at 10:24
  • @IainShelvington, in the case of a large project with multiple functions and code, are you saying that functools.cache is still more useful? And is the memory used by functools.cache for a big project still negligible? – rexy london Jul 08 '22 at 10:27
  • @rexylondon yes, it's more useful IMO; most Python developers will know at a glance what you're doing, and the less code you write the less chance of introducing bugs and the more beautiful it is. The memory usage is negligible: Python is notoriously memory hungry and the few extra bytes used will be a drop in the ocean – Iain Shelvington Jul 08 '22 at 10:31
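
Following DarkKnight's suggestion, a minimal sketch of the same function with a bounded cache (the maxsize of 1024 here is just an illustrative choice, not a recommendation):

import functools

# lru_cache with a maxsize keeps at most that many results and evicts the
# least recently used entry once the limit is reached; maxsize=None behaves
# like functools.cache (unbounded).
@functools.lru_cache(maxsize=1024)  # 1024 is an arbitrary illustrative limit
def gridTraveler(m, n):
    if m == 1 and n == 1:
        return 1
    elif m == 0 or n == 0:
        return 0
    return gridTraveler(m - 1, n) + gridTraveler(m, n - 1)

print(gridTraveler(18, 18))
print(gridTraveler.cache_info())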
