
This is likely a repost but I'm not sure what wording to use for the title.

I'm trying to subtract the values of arrays inside arrays by reshaping them to create a larger array.

import numpy as np

xn = np.array([[1,2,3],[4,5,6]])
yn = np.array([[1,2,3,4,5],[6,7,8,9,10]])

xn.shape
Out[42]: (2, 3)

yn.shape
Out[43]: (2, 5)

The functionality I want is:

yn.reshape(2,-1,1) - xn

This throws a ValueError, but the following works just fine when I index away the first dimension:

yn.reshape(2,-1,1)[0] - xn[0]
Out[44]: 
array([[ 0, -1, -2],
       [ 1,  0, -1],
       [ 2,  1,  0],
       [ 3,  2,  1],
       [ 4,  3,  2]])

This is the first block of the output I would expect, since xn and yn both have a first dimension of 2.
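
For reference, the shapes involved (continuing the session above):

yn.reshape(2,-1,1).shape     # (2, 5, 1)
xn.shape                     # (2, 3)
yn.reshape(2,-1,1)[0].shape  # (5, 1), which broadcasts against xn[0] with shape (3,)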

Is there a proper way to do this with the desired broadcasting?

Desired output:

array([[[ 0, -1, -2],
        [ 1,  0, -1],
        [ 2,  1,  0],
        [ 3,  2,  1],
        [ 4,  3,  2]],

       [[ 2,  1,  0],
        [ 3,  2,  1],
        [ 4,  3,  2],
        [ 5,  4,  3],
        [ 6,  5,  4]]])
Real Person
  • `yn[0]` has five items yet your expected result only has four rows - is that correct? – wwii Apr 05 '19 at 15:41

2 Answers

>>> x
array([[1, 2, 3],
       [4, 5, 6]])
>>> y
array([[ 1,  2,  3,  4,  5],
       [ 6,  7,  8,  9, 10]])
>>> z = y.reshape(2,-1,1)

Add another axis to x:

>>> z-x[:,None,:]
array([[[ 0, -1, -2],
        [ 1,  0, -1],
        [ 2,  1,  0],
        [ 3,  2,  1],
        [ 4,  3,  2]],

       [[ 2,  1,  0],
        [ 3,  2,  1],
        [ 4,  3,  2],
        [ 5,  4,  3],
        [ 6,  5,  4]]])
>>>

Or just:

>>> y[...,None] - x[:,None,:]
array([[[ 0, -1, -2],
        [ 1,  0, -1],
        [ 2,  1,  0],
        [ 3,  2,  1],
        [ 4,  3,  2]],
       [[ 2,  1,  0],
        [ 3,  2,  1],
        [ 4,  3,  2],
        [ 5,  4,  3],
        [ 6,  5,  4]]])
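
The `...` here is shorthand for "all the leading axes", so for the 2-D y above, y[..., None] is the same as y[:, :, None]. A quick check (assuming numpy is imported as np in the same session):

>>> np.array_equal(y[..., None], y[:, :, None])
True
>>> (y[..., None] - x[:, None, :]).shape
(2, 5, 3)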
wwii
    `y[...,None] - x[:,None]` – rafaelc Apr 05 '19 at 15:48
  • Hmm, that works. I didn't know you could do that. I think I like `x[:,None,:]`, it's explicit and shows that last dimension. – wwii Apr 05 '19 at 16:19
  • I haven't seen that [...] notation before and this answer is slightly faster! Nice. – Real Person Apr 05 '19 at 16:46
  • [`Ellipsis expand to the number of : objects needed to make a selection tuple of the same length as x.ndim`](https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.indexing.html) – wwii Apr 05 '19 at 16:49
  • [What does the Python Ellipsis object do?](https://stackoverflow.com/questions/772124/what-does-the-python-ellipsis-object-do) – wwii Apr 05 '19 at 16:50

From the broadcasting rules, two shapes are compatible when, comparing their dimensions element-wise starting from the trailing dimension and moving backwards, each pair is either equal or one of them is 1. So swapping the last two dimensions of xn (after adding another dimension to it) allows it to broadcast against the reshaped yn:

yn.reshape(2, -1, 1) - xn.reshape(2, -1, 1).swapaxes(-1, -2)
array([[[ 0, -1, -2],
        [ 1,  0, -1],
        [ 2,  1,  0],
        [ 3,  2,  1],
        [ 4,  3,  2]],

       [[ 2,  1,  0],
        [ 3,  2,  1],
        [ 4,  3,  2],
        [ 5,  4,  3],
        [ 6,  5,  4]]])

The shape of yn.reshape(2, -1, 1) is (2, 5, 1) and the shape of xn.reshape(2, -1, 1).swapaxes(-1, -2) is (2, 1, 3). These broadcast because, comparing element-wise from the trailing dimensions, each pair of dimensions is either equal or one of them is 1.
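
You can check the resulting broadcast shape directly; a small sanity check (np.broadcast_shapes is available in NumPy 1.20+):

np.broadcast_shapes((2, 5, 1), (2, 1, 3))
# (2, 5, 3)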

Vlad