
The problem statement is simple: given an arbitrary number of one-dimensional NumPy vectors of floats, such as the following:

v1 = numpy.array([0, 0, 0.5, 0.5, 1, 1, 1, 1, 0, 0])
v2 = numpy.array([4, 4, 4, 5, 5, 0, 0])
v3 = numpy.array([1.1, 1.1, 1.2])
v4 = numpy.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10])

What is the fastest way to sum them?

many_vectors = [v1, v2, v3, v4]

Using the built-in sum function directly will not work, because the vectors can have arbitrary, unequal lengths:

>>> result = sum(many_vectors)
ValueError: operands could not be broadcast together with shapes (10,) (7,)

Instead, one can turn to the pandas library, whose fillna method sidesteps the problem:

>>> pandas.DataFrame(v for v in many_vectors).fillna(0.0).sum().values
array([ 5.1,  5.1,  5.7,  5.5,  6. ,  1. ,  1. ,  1. ,  0. ,  0. ,  0. ,
        0. ,  0. ,  0. ,  0. , 10. ])

But this is probably not the most efficient way of proceeding, since production use cases will involve much larger amounts of data.

In [9]: %timeit pandas.DataFrame(v for v in many_vectors).fillna(0.0).sum().values
1.16 ms ± 97.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

2 Answers


Approach #1

With such huge input array sizes and an even larger number of arrays, we need to be memory efficient, and hence I would suggest a loopy approach that iteratively adds up one array at a time -

import numpy as np

many_vectors = [v1, v2, v3, v4] # list of all vectors

lens = [len(i) for i in many_vectors]
L = max(lens)          # length of the longest vector
out = np.zeros(L)
for l,v in zip(lens,many_vectors):
    out[:l] += v       # accumulate each vector into the first l slots
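
As a quick illustrative check (my addition, using the pandas result shown in the question as the reference), the loop reproduces the expected sums for the sample vectors:

# Illustrative check against the pandas result from the question
expected = np.array([5.1, 5.1, 5.7, 5.5, 6, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 10])
assert np.allclose(out, expected)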

Approach #2

Another, almost vectorized, approach uses masking to generate a regular 2D array from the list of irregularly shaped vectors/arrays and then sums along the columns for the final output -

import numpy as np

# Inspired by https://stackoverflow.com/a/38619350/ @Divakar
def stack1Darrs(v):
    lens = np.array([len(item) for item in v])
    mask = lens[:,None] > np.arange(lens.max())   # True where a row still holds data
    out_dtype = np.result_type(*[i.dtype for i in v])
    out = np.zeros(mask.shape,dtype=out_dtype)
    out[mask] = np.concatenate(v)                 # scatter all values into the padded block
    return out

out = stack1Darrs(many_vectors).sum(0)
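
To see what the masking step builds, here is an illustrative peek (not part of the original answer) at the intermediate padded array for the sample vectors:

padded = stack1Darrs(many_vectors)
padded.shape   # (4, 16): one zero-padded row per input vector
padded[2]      # array([1.1, 1.1, 1.2, 0., 0., ..., 0.]) -- v3 padded out to length 16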

The credit goes to @Divakar; this answer only expands and improves upon his. In particular, I rewrote the functions to match our style guide and timed them.

Two approaches are possible:

Approach #1

###############################################################################
def sum_vectors_with_padding_1(vectors):
    """Given an arbitrary amount of NumPy one-dimensional vectors of floats,
    do an element-wise sum, padding with 0 any that are shorter than the
    longest array (see https://stackoverflow.com/questions/56166217).
    """
    import numpy
    all_lengths = [len(i) for i in vectors]
    max_length  = max(all_lengths)
    out         = numpy.zeros(max_length)
    for l,v in zip(all_lengths, vectors): out[:l] += v
    return out

Approach #2

###############################################################################
def sum_vectors_with_padding_2(vectors):
    """Given an arbitrary amount of NumPy one-dimensional vectors of floats,
    do an element-wise sum, padding with 0 any that are shorter than the
    longest array (see https://stackoverflow.com/questions/56166217).
    """
    import numpy
    all_lengths = numpy.array([len(item) for item in vectors])
    mask        = all_lengths[:,None] > numpy.arange(all_lengths.max())
    out_dtype   = numpy.result_type(*[i.dtype for i in vectors])
    out         = numpy.zeros(mask.shape, dtype=out_dtype)
    out[mask]   = numpy.concatenate(vectors)
    return out.sum(axis=0)

Timing

>>> v1 = numpy.array([0, 0, 0.5, 0.5, 1, 1, 1, 1, 0, 0])
>>> v2 = numpy.array([4, 4, 4, 5, 5, 0, 0])
>>> v3 = numpy.array([1.1, 1.1, 1.2])
>>> v4 = numpy.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10])
>>> many_vectors = [v1, v2, v3, v4]
>>> %timeit sum_vectors_with_padding_1(many_vectors)
12 µs ± 645 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
>>> %timeit sum_vectors_with_padding_2(many_vectors)
22.6 µs ± 669 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
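
And as a quick sanity check (my own addition, not part of the original answer), both functions agree on this example:

>>> numpy.allclose(sum_vectors_with_padding_1(many_vectors),
...                sum_vectors_with_padding_2(many_vectors))
True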

So it seems that approach #1 is the fastest of all!
