
As an example, let

A = np.array([[a1, a2],
              [a3, a4]])
B = np.array([[b1, b2],
              [b3, b4]])
C = np.array([[c1, c2],
              [c3, c4]])

let l = [A, B, C], and let

I = np.array([[1, 0, 1],
              [0, 1, 1]])

I want to calculate the matrix R:

R = np.array([[a1 + c1, a2 + c2],
              [b3 + c3, b4 + c4]])

I.e., in general, I have a list of k nxm matrices stored in l and an index matrix I of dimensions nxk that specifies, for each row of the result R, which of the k matrices in l contribute to that row. Above, the first row of I is [1, 0, 1], so the first row of R is the sum of the first rows of A and C.
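
For concreteness, the same operation written as an explicit Python loop (just a sketch; R_loop is a hypothetical name, and it assumes the symbolic entries above are replaced by numbers):

R_loop = np.zeros_like(l[0])
for i in range(I.shape[0]):              # one output row per row of I
    for j in range(I.shape[1]):          # one column of I per matrix in l
        R_loop[i] += I[i, j] * l[j][i]   # weight row i of matrix j by I[i, j]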

Which numpy function can I use for this?

fabian789
  • I'm not familiar enough with numpy to give a complete solution but it seems like you should be able to reshape your data so that this becomes a simple multiplication operation between the coefficients matrix and the terms matrices. – Avish Apr 11 '16 at 07:35

3 Answers


Assuming l is the input list of (n,m) shaped arrays, you can use the very efficient np.einsum, like so -

np.einsum('ij,jik->ik',I,l)

Sample run -

In [75]: # Inputs
    ...: a1,a2,a3,a4 = 4,6,8,2
    ...: b1,b2,b3,b4 = 5,11,4,3
    ...: c1,c2,c3,c4 = 6,99,2,5
    ...: 
    ...: A = np.array([[a1, a2],
    ...:               [a3, a4]])
    ...: B = np.array([[b1, b2],
    ...:               [b3, b4]])
    ...: C = np.array([[c1, c2],
    ...:               [c3, c4]])
    ...: l = [A,B,C]
    ...: 
    ...: I = np.array([[1, 0, 1],
    ...:               [0, 1, 1]])
    ...: # Expected output
    ...: R = np.array([[a1 + c1, a2 + c2],
    ...:               [b3 + c3, b4 + c4]])
    ...: 

In [76]: R
Out[76]: 
array([[ 10, 105],
       [  6,   8]])

In [77]: np.einsum('ij,jik->ik',I,l)
Out[77]: 
array([[ 10, 105],
       [  6,   8]])
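
To see what this einsum computes in plain broadcasting terms, here is a minimal equivalent sketch (assuming l is stacked into an array of shape (k, n, m); arr and R_check are hypothetical names):

arr = np.asarray(l)                                   # shape (k, n, m) = (3, 2, 2)
# multiply I[i, j] with arr[j, i, k] and sum over the shared j axis
R_check = (I[:, :, None] * arr.transpose(1, 0, 2)).sum(axis=1)
# array([[ 10, 105],
#        [  6,   8]])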
Divakar
  • Wow, I did not know about `einsum`. Would you care to explain `ij, jik → ik`? – fabian789 Apr 14 '16 at 11:22
  • @fabian789 Basically we are elementwise multiplying the input arrays with `axis=0` from `I` aligned with `axis=1` from `l` and `axis=1` from `I` aligned with `axis=0` from `l`. Then, sum-reducing the first axis of the multiplication result. All of this is done in one efficient step with that einsum call. For more info, please refer to this link - http://stackoverflow.com/questions/26089893/understanding-numpys-einsum – Divakar Apr 14 '16 at 12:15
  • Cool! I am accepting this as it seems to be the most general. Thanks – fabian789 Apr 14 '16 at 13:38

Assuming:

terms = np.array([A, B, C])  # A,B,C are 2x2 matrices
coeffs = np.array([[1,0,1], [0,1,1]])

You can do:

# transpose terms to 2x2 matrix of [A,B,C] element vectors,
# with shape (2,2,3):
transposed_terms = np.transpose(terms, (1,2,0)) 
# tile coefficients to align:
tiled_coeffs = np.tile(coeffs, 2).reshape(2,2,3)
# element-wise multiplication and sum over last dimension
R = np.sum(transposed_terms * tiled_coeffs, axis=2)

There's probably a simpler way to do this using something like numpy.tensordot, but my numpy-fu is not strong enough for this.
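
As an aside, the np.tile step can also be replaced by plain broadcasting (a sketch under the same assumptions about terms and coeffs; this uses broadcasting rather than tensordot):

# coeffs[:, None, :] has shape (2, 1, 3) and broadcasts across the column axis
R = (coeffs[:, None, :] * np.transpose(terms, (1, 2, 0))).sum(axis=2)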

Avish

I agree with @Avish that this is very much like a matrix multiplication, but in the end you just want the diagonal entries (this assumes l is stacked into an array, i.e. l = np.array([A, B, C])):

np.diagonal(l.T.dot(I.T).T).T

How it works:

l.T will give [[[a1, b1, c1], [a3, b3, c3]], [[a2, b2, c2], [a4, b4, c4]]] (shape (2, 2, 3))

l.T.dot(I.T) will give [[[a1+c1, b1+c1], [a3+c3, b3+c3]], [[a2+c2, b2+c2], [a4+c4, b4+c4]]]

l.T.dot(I.T).T is [[[a1+c1, a2+c2], [a3+c3, a4+c4]], [[b1+c1, b2+c2], [b3+c3, b4+c4]]]

And the diagonal entries (np.diagonal(l.T.dot(I.T).T)) are [[a1+c1, b3+c3], [a2+c2, b4+c4]]

Finally, its transpose is [[a1+c1, a2+c2], [b3+c3, b4+c4]], which is the desired R.
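
As a quick numerical check, here is a minimal sketch reusing the sample values from the einsum answer (the shape comments match the intermediate steps described above):

import numpy as np

A = np.array([[4, 6], [8, 2]])
B = np.array([[5, 11], [4, 3]])
C = np.array([[6, 99], [2, 5]])
l = np.array([A, B, C])        # stacked into an array of shape (3, 2, 2)
I = np.array([[1, 0, 1],
              [0, 1, 1]])

step = l.T.dot(I.T)            # shape (2, 2, 2)
R = np.diagonal(step.T).T      # array([[ 10, 105],
                               #        [  6,   8]])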

ayhan