To motivate the 'efficient' in the title: I am working with volumetric image data that can be up to 512x512x1000 pixels. So slow loops etc. are not really an option, particularly if the images need to be viewed in a GUI. Imagine sitting even just 10 s in front of a viewer waiting for images to load...
From two 3D input volumes x and y I calculate new 3D output volumes, currently up to three at a time, e.g. by solving an equation system for each pixel. Since many x,y combinations repeat, and often only a coherent meshgrid range is of interest anyway, I am trying to speed things up by creating a lookup table for this subregion. This works well: in my test case I need only ca. 3000 calculations instead of 30 million.
Now, to the problem: I am utterly failing at efficiently looking up the solutions of the 30 million x,y combinations from the 3000 solutions lookup table in a numpythonic way!
Let's try with an example:
import numpy as np

#                   x   y   s1   s2
lookup = np.array([[ 4, 11, 23.,  4.],
                   [ 4, 12, 25., 13.],
                   [ 5, 11, 21., 19.],
                   [ 5, 12, 26., 56.]])
I succeed in getting the index of one x,y pair following this post:
ii = np.where((lookup[:,0] == 4) & (lookup[:,1]==12))[0][0]
s1, s2 = lookup[ii,-2:]
print('At index',ii,':',s1,s2)
>>> At index 1 : 25.0 13.0
Q1: But how do I vectorize this, i.e. get the full solution arrays for all 30 million pixels at once?
s1, s2 = lookup[numpy_magic_with_xy, -2:]
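One direction I've been considering (not sure it's the right one): since my lookup table covers a full, coherent (x, y) meshgrid, the row index should be computable arithmetically rather than searched for. A minimal sketch, where x0, y0 and ny are values I'm assuming describe the region (origin and number of distinct y values):

```python
import numpy as np

lookup = np.array([[ 4, 11, 23.,  4.],
                   [ 4, 12, 25., 13.],
                   [ 5, 11, 21., 19.],
                   [ 5, 12, 26., 56.]])

x0, y0 = 4, 11   # assumed origin of the meshgrid region
ny = 2           # assumed number of distinct y values per x

# example pixel coordinates (flattened image stand-in)
x = np.array([4, 5, 5])
y = np.array([12, 11, 12])

# rows are ordered by x first, then y, so the row index is arithmetic
ii = (x - x0) * ny + (y - y0)

# fancy indexing pulls out all solutions in one shot
s1, s2 = lookup[ii, -2:].T
print(s1, s2)   # → [25. 21. 26.] [13. 19. 56.]
```

This avoids any per-pixel np.where search, but it only works if the table really is a dense, regularly ordered grid.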
Q2: And actually I'd like to set all solutions to zero for all x,y not within the region of interest. Where do I add that condition?
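For Q2, my best guess so far is a boolean mask: compute the mask once from the ROI bounds, clamp out-of-range pixels to some valid dummy row, and zero their solutions afterwards. A sketch, with the ROI bounds x0..x1, y0..y1 being assumed values for my example table:

```python
import numpy as np

lookup = np.array([[ 4, 11, 23.,  4.],
                   [ 4, 12, 25., 13.],
                   [ 5, 11, 21., 19.],
                   [ 5, 12, 26., 56.]])

x0, x1, y0, y1 = 4, 5, 11, 12   # assumed ROI bounds (inclusive)
ny = y1 - y0 + 1

x = np.array([4, 5, 3])          # last pixel lies outside the ROI
y = np.array([12, 11, 11])

inside = (x >= x0) & (x <= x1) & (y >= y0) & (y <= y1)

# clamp out-of-ROI pixels to row 0 so the fancy indexing stays valid,
# then overwrite their solutions with zero
ii = np.where(inside, (x - x0) * ny + (y - y0), 0)
s1 = np.where(inside, lookup[ii, 2], 0.0)
s2 = np.where(inside, lookup[ii, 3], 0.0)
print(s1, s2)   # → [25. 21.  0.] [13. 19.  0.]
```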
Q3: And what would really be the fastest way to achieve all this?
PS: I'm fine with using 1D image representations by working with x.ravel() etc. and reshaping at the end. Unless you tell me I don't need to and it's just slowing things down. I'm mainly doing it so I can still understand my own code, I guess...
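Regarding the PS: as far as I understand, fancy indexing preserves the shape of the index array, so the ravel()/reshape() round trip may be optional anyway. A tiny check with made-up values:

```python
import numpy as np

table = np.array([10., 20., 30., 40.])
ii = np.array([[0, 1],
               [2, 3]])   # 2D index array as a stand-in for a 3D one

# the result has the same shape as ii, no ravel/reshape needed
out = table[ii]
print(out.shape)   # → (2, 2)
print(out)         # → [[10. 20.] [30. 40.]]
```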