I am implementing prioritized sweeping, which uses a 1000×1000 grid-world matrix whose cells I have to access repeatedly for assignment inside a `while True` loop (I am not iterating over the whole structure; each cell is simply read and written more than once). Right now I map each matrix position (i, j) to an index in a 1D array, so my 1000×1000 matrix is stored as one flat list of 1,000,000 elements. Is this flattening going to slow down fetch time, so that I would be better off with a nested 1000×1000 structure? Also, which is faster for this access pattern: NumPy arrays or plain Python lists? It would be great if you could help me out with this!
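For reference, the two storage schemes in question can be compared directly with `timeit`. This is a minimal sketch (the grid size, cell coordinates, and repetition count are illustrative, not from the question) timing one read-modify-write of a single cell in a flat list versus a nested list:

```python
import timeit

N = 1000  # grid side length

flat = [0.0] * (N * N)                   # 1D flattened storage: index = i * N + j
nested = [[0.0] * N for _ in range(N)]   # 2D nested-list storage

i, j = 512, 777  # an arbitrary cell

def flat_access():
    # One read + one write via the flattened index
    flat[i * N + j] = flat[i * N + j] + 1.0

def nested_access():
    # One read + one write via two levels of indexing
    nested[i][j] = nested[i][j] + 1.0

t_flat = timeit.timeit(flat_access, number=100_000)
t_nested = timeit.timeit(nested_access, number=100_000)
print(f"flat:   {t_flat:.4f}s")
print(f"nested: {t_nested:.4f}s")
```

The flat scheme pays for an extra multiply-add per access, while the nested scheme pays for an extra list indexing step; on CPython the two usually end up in the same ballpark, so this is worth measuring on your own machine rather than assuming.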
- You can use `timeit` to check. – Autonomous Apr 16 '18 at 17:58
- Check this one: https://stackoverflow.com/questions/35232406/why-is-a-for-over-a-python-list-faster-than-over-a-numpy-array – FadeoN Apr 16 '18 at 18:32
- Turns out numpy was slowing me down. Thanks! – SH_V95 Apr 16 '18 at 21:28
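The last comment matches the usual pattern: NumPy is fast for vectorized whole-array operations, but repeated single-element indexing creates a NumPy scalar object on every read, which plain lists avoid. A rough sketch of how one might measure that difference (the index and repetition count here are arbitrary):

```python
import timeit
import numpy as np

N = 1000
arr_np = np.zeros(N * N)      # NumPy 1D array
arr_list = [0.0] * (N * N)    # plain Python list
idx = 123_456                 # an arbitrary flattened cell index

# Time a single-element read from each container.
t_np = timeit.timeit(lambda: arr_np[idx], number=100_000)
t_list = timeit.timeit(lambda: arr_list[idx], number=100_000)

# Scalar indexing into a NumPy array boxes the value into a numpy
# scalar each time, so the list is typically faster for this pattern.
print(f"numpy scalar reads: {t_np:.4f}s")
print(f"list scalar reads:  {t_list:.4f}s")
```

If the update rule over cells can be expressed as array-wide operations instead of per-cell assignments, NumPy regains its advantage; for cell-at-a-time loops, lists are the safer default.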