
I am struggling to express a conceptually simple algorithm with optimized numpy vectorized operations. In the code below, `data` is an array of values and `coords` is an array whose entry `i` contains the 3D spatial coordinates corresponding to `data[i]`. I want to populate the array `max_data` so that `max_data[i,j,k]` is the maximum over all entries of `data` whose corresponding coordinates fall in the box `[i, i+1) x [j, j+1) x [k, k+1)`. Example code that generates the data and implements the algorithm is below.

Is there any way to speed this up with numpy vectorized operations? I am running a version of this on arrays with `ndata` ~ 1e9 and it takes forever. I am not opposed to using other Python libraries.

import numpy as np
import time

shape = ( 20, 30, 40 )

ndata = int( 1e6 )

data = np.random.normal(  loc = 10, scale = 5, size = ndata ) 

coords = np.vstack( [ np.random.uniform( 0, shape[i], ndata )
                      for i in range( len( shape ) ) ] ).T


max_data = np.zeros( shape ) 

start = time.time()

for i in range( len( data ) ) :

    # shortcut to find bin indices when the bins are
    # [ range( shape[i] ) for i in range( len( shape ) ) ]

    bin_indices = tuple( coords[i].astype( int ) )  

    max_data[ bin_indices ] = max( max_data[ bin_indices ], data[ i ] )

elapsed = time.time() - start

print( 'elapsed: %.3e' % elapsed )  # 2.98 seconds on my computer 
  • update for anyone who faces a similar problem: i found `scipy.stats.binned_statistic_dd` which does exactly what I want, but is interestingly over 10x slower than the naive code in the question. – jacob May 28 '19 at 03:39
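
For reference, a minimal sketch of the `binned_statistic_dd` call mentioned in the comment above; the integer bin edges are an assumption matching the question's grid, and empty bins may come back as NaN rather than 0, so a cleanup pass may be needed:

import numpy as np
from scipy.stats import binned_statistic_dd

# bin edges 0, 1, ..., shape[d] along each axis, matching the
# integer grid used in the question
bins = [np.arange(s + 1) for s in shape]

# statistic='max' computes the per-bin maximum of data
stat, edges, binnumber = binned_statistic_dd(coords, data,
                                             statistic='max',
                                             bins=bins)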

1 Answer


Using the 2nd-fastest solution from https://stackoverflow.com/a/55226663/7207392 gives me a >30x speedup. The trick is to flatten each point's 3D bin index into a linear index, build a sparse matrix with one row per data point and one column per bin, and convert it to CSC format; that groups the data by bin, so `np.maximum.reduceat` can take each bin's maximum in a single vectorized pass. If you are open to using Pythran, an even faster solution is available.

import numpy as np
from scipy import sparse
import time

shape = ( 20, 30, 40 )

ndata = int( 1e6 )

data = np.random.normal(  loc = 10, scale = 5, size = ndata ) 

coords = np.vstack( [ np.random.uniform( 0, shape[i], ndata )
                      for i in range( len( shape ) ) ] ).T

max_data = np.zeros( shape ) 

start = time.time()

for i in range( len( data ) ) :

    # shortcut to find bin indices when the bins are
    # [ range( shape[i] ) for i in range( len( shape ) ) ]

    bin_indices = tuple( coords[i].astype( int ) )  

    max_data[ bin_indices ] = max( max_data[ bin_indices ], data[ i ] )

elapsed = time.time() - start

print( 'elapsed: %.3e' % elapsed )  # 2.98 seconds on my computer 


start = time.time()

# flatten each point's 3D bin index into a single linear index
bin_indices = np.ravel_multi_index(coords.astype(int).T, shape)

# one row per data point, one column per bin; converting CSR -> CSC
# groups aux.data by bin, with aux.indptr marking the group boundaries
aux = sparse.csr_matrix((data, bin_indices, np.arange(data.size+1)),
                        (data.size, np.prod(shape))).tocsc()

# trailing empty bins all have indptr == data.size; cut them off
cut = aux.indptr.searchsorted(data.size)

# per-bin maximum in one vectorized pass
max_data_pp = np.empty(shape)
max_data_pp.ravel()[:cut] = np.maximum.reduceat(aux.data, aux.indptr[:cut])

CLIPAT = 0

# reduceat returns a stray element for empty bins, so reset those to
# CLIPAT, then clip negative maxima to match the zero-initialized loop
max_data_pp.ravel()[aux.indptr[:-1]==aux.indptr[1:]] = CLIPAT
max_data_pp[max_data_pp < CLIPAT] = CLIPAT

elapsed = time.time() - start

print( 'elapsed: %.3e' % elapsed )  # 0.064 seconds on my computer 


assert np.all(max_data == max_data_pp)

Sample run:

elapsed: 2.417e+00
elapsed: 6.387e-02
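
As a rough alternative to the Pythran route, here is a minimal numba sketch of the same compiled loop; numba, and the helper name bin_max, are my assumptions here, not part of the linked answer:

import numpy as np
import numba

@numba.njit
def bin_max(data, bin_indices, nbins):
    # plain loop over the points; njit compiles it to machine code
    out = np.zeros(nbins)
    for i in range(data.size):
        b = bin_indices[i]
        if data[i] > out[b]:
            out[b] = data[i]
    return out

# usage: same flattened bin_indices as above, reshaped at the end
# max_data_nb = bin_max(data, bin_indices, np.prod(shape)).reshape(shape)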
Paul Panzer