I'm making Conway's Game of Life in Python 3, and I have a function called updateboard that gives birth to and kills cells based on their neighbor counts (0 to 8), which are stored in self.neighbors. The function looks like this:
def updateboard(self):
    """
    Updates the board state
    """
    alive = np.zeros((self.cellsize, self.cellsize), dtype=bool)  # board state for next loop (np.bool was removed in NumPy 1.24+)
    # iterate cells
    for y in range(self.cellsize):
        for x in range(self.cellsize):
            if self.neighbors[x, y] > 0:  # only look at cells with at least one neighbor
                if self.board[x, y]:  # cell is currently alive
                    alive[x, y] = self.neighbors[x, y] in (2, 3)
                else:  # cell is currently dead
                    alive[x, y] = self.neighbors[x, y] == 3
    # update states
    self.updateneighbors(self.board, alive)
    self.board = alive
To avoid redundant checks, I first test whether self.neighbors at that cell is greater than 0 before deciding whether the cell lives or dies.
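In case the surrounding class matters, here is a self-contained sketch of the same rule (life_step and its argument names are made up for illustration; it assumes the neighbor counts have already been computed elsewhere):

```python
import numpy as np

def life_step(board, neighbors):
    """One Game of Life step from a precomputed neighbor-count grid."""
    alive = np.zeros_like(board, dtype=bool)  # next state, all dead by default
    size = board.shape[0]
    for y in range(size):
        for x in range(size):
            if neighbors[x, y]:  # cells with zero neighbors stay (or become) dead
                if board[x, y]:
                    alive[x, y] = neighbors[x, y] in (2, 3)  # survival rule
                else:
                    alive[x, y] = neighbors[x, y] == 3  # birth rule
    return alive
```

Feeding it a horizontal blinker produces the expected vertical blinker on the next step.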
While trying different ways to optimize this function, I found that changing if self.neighbors[x, y] > 0 to if self.neighbors[x, y] sped it up significantly. Running the Python profiler shows that this one change made the function several times faster (tottime dropped from about 12.5 s to 1.6 s).
Before

   ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
      459   12.465    0.027   13.916    0.030  logic.py:55(updateboard)

After

   ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
      460    1.619    0.004    3.067    0.007  logic.py:55(updateboard)
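Here is a minimal standalone sketch that should isolate the two conditions from the rest of my code (the grid and sizes are made up; absolute timings will vary by machine):

```python
import timeit

import numpy as np

# Hypothetical neighbor-count grid with values 0..8, like self.neighbors
rng = np.random.default_rng(0)
grid = rng.integers(0, 9, size=(100, 100)).astype(np.uint8)

def count_with_comparison(neighbors):
    total = 0
    for y in range(neighbors.shape[0]):
        for x in range(neighbors.shape[1]):
            if neighbors[x, y] > 0:  # explicit comparison on a NumPy scalar
                total += 1
    return total

def count_with_truthiness(neighbors):
    total = 0
    for y in range(neighbors.shape[0]):
        for x in range(neighbors.shape[1]):
            if neighbors[x, y]:  # plain truthiness test on the same scalar
                total += 1
    return total

t_cmp = timeit.timeit(lambda: count_with_comparison(grid), number=20)
t_truth = timeit.timeit(lambda: count_with_truthiness(grid), number=20)
print(f"> 0 comparison: {t_cmp:.3f}s  truthiness: {t_truth:.3f}s")
```

Both versions count the same cells; only the form of the condition differs.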
I looked for an explanation online and found many similar questions, but haven't managed to find an answer to this specific one. I'm both confused and surprised that such a small change made such a difference, and I'd greatly appreciate it if someone could explain it to me.