Indexing Numpy ndarray performance worse than (Python) lists?
Some background: to solve Sudokus I converted the problem to an exact cover problem and solved it in Python using Knuth's "Algorithm X", but (obviously?) without the dancing links. It all seems to work fine, but performance is somewhat slow compared to the more traditional backtracking search combined with arc consistency. About 75% of the time is spent in a function I called 'cover', which basically does a lot of list indexing. Since Python lists are implemented as dynamic arrays of pointers to Python objects, and NumPy ndarrays are fixed-size arrays whose elements all have the same size (like C arrays), I thought converting the Python lists to ndarrays would speed up my program.
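For reference, a minimal micro-benchmark of bare single-element access (just a sanity check of that assumption; the exact numbers will of course vary per machine):

import timeit
import numpy as np

py_list = [0] * 100
np_arr = np.zeros(100, dtype=np.uint8)

# read a single element a million times from each container
print("list   :", timeit.timeit(lambda: py_list[50], number=1_000_000))
print("ndarray:", timeit.timeit(lambda: np_arr[50], number=1_000_000))

Here is the (simplified) cover function and the two data versions I compared: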
import numpy as np
def cover(row_valid, col_valid):
    # 'row' is the global 100x100 matrix defined below
    for cnt in range(10000):
        for j, v in enumerate(row[20]):
            if v == 1:
                # for all values = 1 in row r set col c invalid
                col_valid[j] = 1  # hack
                # for all valid rows r': if row[r][j] == 1 and row[r'][j] == 1:
                # set row r' invalid (this includes r itself)
                for row_idx, valid in enumerate(row_valid):
                    if valid == 1:  # if row is valid
                        if row[row_idx][j] == 1:
                            row_valid[row_idx] = 1  # hack
# version with Python lists
row = [[0 for c in range(100)] for r in range(100)]
row_valid = 100 * [1]
col_valid = 100 * [1]
# version with Numpy ndarrays
row = np.zeros((100, 100), dtype=np.uint8)
row_valid = np.ones(100, dtype=np.uint8)
col_valid = np.ones(100, dtype=np.uint8)
# run both versions using
import time
start_time = time.time()
cover(row_valid, col_valid)
end_time = time.time()
hrs, rem = divmod(end_time-start_time, 3600)
mins, secs = divmod(rem, 60)
print("duration [hh:mm:ss]: {:0>2}:{:0>2}:{:05.2f}".format(int(hrs),int(mins),secs))
print()
The function cover isn't doing anything useful; I modified it a bit to make it repeatable, but its structure is the same as in the original.
The NumPy ndarray version is roughly 25 times slower than the list version. Any ideas why this is so? Is there a way to improve the performance of the cover function?
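For what it's worth, here is a rough sketch of the direction I was considering for the ndarray version, replacing the inner Python loops with boolean masks (the name cover_vectorized and the hard-coded r=20 are just for this example, and I have not verified that it is exactly equivalent to the original cover logic):

import numpy as np

def cover_vectorized(row, row_valid, col_valid, r=20):
    # columns j where row[r][j] == 1, found in one vectorized step
    cols = row[r] == 1                       # boolean mask over columns
    col_valid[cols] = 1                      # same 'hack' value as above

    # valid rows r' that have a 1 in any of those columns
    hits = (row[:, cols] == 1).any(axis=1)   # one boolean per row
    row_valid[(row_valid == 1) & hits] = 1   # same 'hack' value as above

(This would of course only apply to the ndarray version, not to the list version.)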