The first solution provided a nice short idiom using np.searchsorted, which pays the cost of a sort plus many searches. But numpy has a fast route in its source code (implemented in Python, in fact) for the equal-bin-edge case, which computes the bin index mathematically: a vectorized subtraction and multiplication plus a few comparisons replace the search entirely.
This solution follows the numpy histogram code for the bin lookup, the type resolution, and the handling of weights and complex numbers. It is basically the first solution combined with numpy's histogram fast route, plus some extra type and iteration details.
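Before the full function, here is a minimal sketch of that fast route in isolation (the names lo, hi, nbins are mine, and the edge/ULP corrections are deliberately omitted; the full version below restores them):

import numpy as np

# Sketch only: for uniform bins the index of x is floor((x - lo) * nbins / (hi - lo)),
# so no search is needed. Usually matches np.histogram exactly; the full
# function below adds the corrections that guarantee it.
x = np.random.randn(1000)
lo, hi, nbins = -2.5, 2.5, 21
inside = (x >= lo) & (x <= hi)             # drop out-of-range values
idx = ((x[inside] - lo) * (nbins / (hi - lo))).astype(np.intp)
idx[idx == nbins] -= 1                     # values exactly on the right edge
counts = np.bincount(idx, minlength=nbins)
print(np.array_equal(counts, np.histogram(x, bins=nbins, range=(lo, hi))[0]))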
import numpy as np

# Keep a reference to the builtin, since the `range` parameter below shadows it.
_range = range

def hist_np_laxis(a, bins=10, range=None, weights=None):
    """Histogram along the last axis, following np.histogram's fast path."""
    N = a.shape[-1]
    data2D = a.reshape(-1, N)
    limit = bins * data2D.shape[0]
    if range is None:
        # Mirror np.histogram's default: use the data's min/max.
        range = (a.min(), a.max())
    if weights is not None:
        # Align the weights with the flattened data so block slicing matches.
        weights = weights.reshape(-1, N)
    # gh-10322 means that type resolution rules are dependent on array
    # shapes. To avoid this causing problems, we pick a type now and stick
    # with it throughout.
    bin_type = np.result_type(range[0], range[1], a)
    if np.issubdtype(bin_type, np.integer):
        bin_type = np.result_type(bin_type, float)
    bin_edges = np.linspace(range[0], range[1], bins + 1,
                            endpoint=True, dtype=bin_type)
    # Histogram is an integer or a float array depending on the weights.
    if weights is None:
        ntype = np.dtype(np.intp)
    else:
        ntype = weights.dtype
    n = np.zeros(limit, ntype)
    # Pre-compute histogram scaling factor
    norm = bins / (range[1] - range[0])
    # We set a block size, as this allows us to iterate over chunks when
    # computing histograms, to minimize memory usage.
    BLOCK = 65536
    # We iterate over blocks here for two reasons: for large arrays it is
    # actually faster (e.g. 2x as fast for a 10^8 element array), and it
    # results in a memory footprint 3x lower in the limit of large arrays.
    for i in _range(0, data2D.shape[0], BLOCK):
        tmp_a = data2D[i:i + BLOCK]
        block_size = tmp_a.shape[0]
        if weights is None:
            tmp_w = None
        else:
            tmp_w = weights[i:i + BLOCK]
        # Only include values in the right range
        keep = (tmp_a >= range[0])
        keep &= (tmp_a <= range[1])
        if not np.logical_and.reduce(np.logical_and.reduce(keep)):
            tmp_a = tmp_a[keep]
            if tmp_w is not None:
                tmp_w = tmp_w[keep]
        # This cast ensures no type promotions occur below, which gh-10322
        # makes unpredictable. Getting it wrong leads to precision errors
        # like gh-8123.
        tmp_a = tmp_a.astype(bin_edges.dtype, copy=False)
        # Compute the bin indices; for values that lie exactly on the
        # last edge we need to subtract one
        f_indices = (tmp_a - range[0]) * norm
        indices = f_indices.astype(np.intp)
        indices[indices == bins] -= 1
        # The index computation is not guaranteed to give exactly
        # consistent results within ~1 ULP of the bin edges.
        decrement = tmp_a < bin_edges[indices]
        indices[decrement] -= 1
        # The last bin includes the right edge. The other bins do not.
        increment = ((tmp_a >= bin_edges[indices + 1])
                     & (indices != bins - 1))
        indices[increment] += 1
        # Offset each row's bin indices by bins*row so that every row owns
        # its own contiguous band of `bins` slots in the flat output array.
        indices = ((bins * np.arange(i, i + block_size)[:, None] * keep)[keep]
                   .reshape(indices.shape) + indices).reshape(-1)
        if tmp_w is not None and tmp_w.ndim > 1:
            # No out-of-range values were dropped, so flatten to match indices.
            tmp_w = tmp_w.reshape(-1)
        # We now compute the histogram using bincount
        if ntype.kind == 'c':
            n.real += np.bincount(indices, weights=tmp_w.real,
                                  minlength=limit)
            n.imag += np.bincount(indices, weights=tmp_w.imag,
                                  minlength=limit)
        else:
            n += np.bincount(indices, weights=tmp_w,
                             minlength=limit).astype(ntype)
    n.shape = a.shape[:-1] + (bins,)
    return n
data = np.random.randn(4, 5, 6)
out1 = hist_laxis(data, n_bins=200001, range_limits=(-2.5, 2.5))
out2 = hist_np_laxis(data, bins=200001, range=(-2.5, 2.5))
print(np.allclose(out1, out2))
True
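The run above only exercises the unweighted path. As a quick sanity check of the weights and complex-weights handling (the names w, wc, ref_w, out_w, out_c are mine, not from the original), one can compare against a slow but trusted per-row np.histogram loop:

# Weighted path vs. a plain per-row np.histogram loop.
N = data.shape[-1]
w = np.random.rand(*data.shape)
out_w = hist_np_laxis(data, bins=21, range=(-2.5, 2.5), weights=w)
ref_w = np.stack([
    np.histogram(row, bins=21, range=(-2.5, 2.5), weights=wrow)[0]
    for row, wrow in zip(data.reshape(-1, N), w.reshape(-1, N))
]).reshape(data.shape[:-1] + (21,))
print(np.allclose(out_w, ref_w))
# Complex weights: real and imaginary parts are binned independently.
wc = w + 1j * w
out_c = hist_np_laxis(data, bins=21, range=(-2.5, 2.5), weights=wc)
print(np.allclose(out_c.real, ref_w) and np.allclose(out_c.imag, ref_w))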
%timeit hist_np_laxis(data, bins=21, range=(-2.5, 2.5))
92.1 µs ± 504 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit hist_laxis(data, n_bins=21, range_limits=(-2.5, 2.5))
55.1 µs ± 3.66 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
The first solution is faster not only in this small example but also in a larger one:
data = np.random.randn(400, 500, 6)
%timeit hist_np_laxis(data, bins=21, range=(-2.5, 2.5))
264 ms ± 2.68 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit hist_laxis(data, n_bins=21, range_limits=(-2.5, 2.5))
71.6 ms ± 377 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
It is not ALWAYS faster:
data = np.random.randn(400, 6, 500)
%timeit hist_np_laxis(data, bins=101, range=(-2.5, 2.5))
71.5 ms ± 128 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit hist_laxis(data, n_bins=101, range_limits=(-2.5, 2.5))
76.9 ms ± 137 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
However, the numpy variant is only faster when the last axis is large, and even then the gain is slight. In every other case I tried, the first solution is much faster, regardless of the bin count and the sizes of the first two dimensions. The only nontrivial line,

indices = ((bins*np.arange(i, i+block_size)[:,None] * keep)[keep].reshape(indices.shape) + indices).reshape(-1)

might be further optimizable, though I have not found a faster method yet.
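To illustrate what that one line achieves, here is a toy example (the shapes and values are mine): each row's bin indices get shifted into their own band of `bins` slots, so a single flat bincount produces every row's histogram at once.

import numpy as np

# Toy demo of the row-offset trick: 2 rows, 3 bins each.
bins = 3
idx = np.array([[0, 2, 1],
                [1, 1, 0]])                              # per-row bin indices
flat = (idx + bins * np.arange(2)[:, None]).reshape(-1)  # [0 2 1 4 4 3]
print(np.bincount(flat, minlength=2 * bins).reshape(2, bins))
# [[1 1 1]
#  [1 2 0]]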
This also suggests that the sheer number of O(n) vectorized passes here ends up costing more than the O(n log n) sort and repeated incremental searches of the first solution.
However, realistic use cases tend to have a long last axis and small leading axes, so the shapes benchmarked in the first solution are too contrived to reflect the performance that matters in practice.
Adding an axis argument to np.histogram is tracked as an open issue in the numpy repo: https://github.com/numpy/numpy/issues/13166.
The xhistogram library has also sought to solve this problem: https://xhistogram.readthedocs.io/en/latest/