A matrix/vector multiplication looks like this and works fine:
import numpy as np

A = np.array([[1, 0],
              [0, 1]])        # matrix
u = np.array([[3], [9]])      # column vector
U = np.dot(A, u)              # U = A*u
U
Is there a way to keep the same structure if the elements Xx, Xy, Yx, Yy, u, v are each 2- or 3-dimensional arrays?
A = np.array([[Xx, Xy],
              [Yx, Yy]])
w = np.array([[u], [v]])   # named w so it does not shadow the block u
U = np.dot(A, w)
The desired result is the following, which already works fine (so the answer to my question is rather a 'nice to have'):
U = Xx*u + Xy*v
V = Yx*u + Yy*v
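For concreteness, here is a minimal runnable sketch of that elementwise version; the 3x3 block shapes and values are made up for illustration, not taken from my actual data:

import numpy as np

# hypothetical 3x3 blocks; any common shape works because all products are elementwise
Xx, Xy = np.arange(9).reshape(3, 3), np.arange(9).reshape(3, 3) + 1
Yx, Yy = 2 * Xx, 2 * Xy
u, v = np.ones((3, 3)), np.full((3, 3), 2.0)

U = Xx * u + Xy * v       # first block of the result
V = Yx * u + Yy * v       # second block of the result
print(U.shape, V.shape)   # (3, 3) (3, 3)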
Success Story
Thanks to @loopy walt and to @Ivan!
1. It works: np.einsum('ijmn,jkmn->ikmn', A, w) and (w.T@A.T).T both reproduce the correct contravariant coordinate transformation.
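A minimal sketch checking this, reusing the same hypothetical 3x3 blocks as above (the shapes are an assumption; only the trailing block axes matter):

import numpy as np

Xx, Xy = np.arange(9).reshape(3, 3), np.arange(9).reshape(3, 3) + 1
Yx, Yy = 2 * Xx, 2 * Xy
u, v = np.ones((3, 3)), np.full((3, 3), 2.0)

A = np.array([[Xx, Xy],
              [Yx, Yy]])     # shape (2, 2, 3, 3)
w = np.array([[u], [v]])     # shape (2, 1, 3, 3)

res_einsum = np.einsum('ijmn,jkmn->ikmn', A, w)   # contracts the block index j
res_matmul = (w.T @ A.T).T                        # transpose, let @ broadcast over the block axes, transpose back
print(np.allclose(res_einsum, res_matmul))              # True
print(np.allclose(res_einsum[0, 0], Xx * u + Xy * v))   # True
print(np.allclose(res_einsum[1, 0], Yx * u + Yy * v))   # True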
2. Minor issue: np.einsum('ijmn,jkmn->ikmn', A, w) and (w.T@A.T).T both return an array with one pair of square brackets too many (each block should be 2-dimensional). See @Ivan's answer for why:
array([[[   0,   50,  200],
        [ 450,  800, 1250],
        [1800, 2450, 3200]]])
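One possible way to drop the superfluous axis (my own workaround, an assumption rather than something taken from the answers): the column-vector construction [[u], [v]] gives w a length-1 axis, which the result keeps, so it can be indexed or squeezed away:

import numpy as np

Xx = Xy = Yx = Yy = np.arange(9).reshape(3, 3)   # hypothetical blocks, just to make this runnable
u = v = np.ones((3, 3))
A = np.array([[Xx, Xy], [Yx, Yy]])               # (2, 2, 3, 3)
w = np.array([[u], [v]])                         # (2, 1, 3, 3): the 1 is the extra axis

res = np.einsum('ijmn,jkmn->ikmn', A, w)         # (2, 1, 3, 3)
U, V = res[:, 0]                                 # index the singleton axis away: each (3, 3)
# equivalently: U, V = np.squeeze(res, axis=1)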
3. To optimize: building the block matrix is time-consuming:
A = np.array([[Xx, Xy],
              [Yx, Yy]])
w = np.array([[u], [v]])
So the overall runtime for np.einsum('ijmn,jkmn->ikmn', A, w) and (w.T@A.T).T is larger. Perhaps this can be optimized; see the timing sketch after the measurements below.
- p_einstein: np.einsum('ijmn,jkmn->ikmn', A, w): 9.07 µs ± 267 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
- p_tensor: (w.T@A.T).T: 7.89 µs ± 225 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
- p_vect: [Xx*u1 + Xy*v1, Yx*u1 + Yy*v1]: 2.63 µs ± 173 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
- p_vect, including A = np.array([[Xx, Xy], [Yx, Yy]]): 7.66 µs ± 290 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
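For reference, a sketch of how the three variants above could be re-timed with timeit; the 3x3 block shapes are assumed and the absolute numbers will of course differ from the figures quoted above:

import numpy as np
from timeit import timeit

Xx = Xy = Yx = Yy = np.arange(9, dtype=float).reshape(3, 3)   # hypothetical blocks
u1 = v1 = np.ones((3, 3))
A = np.array([[Xx, Xy], [Yx, Yy]])
w = np.array([[u1], [v1]])

n = 100_000
print(timeit(lambda: np.einsum('ijmn,jkmn->ikmn', A, w), number=n))      # p_einstein
print(timeit(lambda: (w.T @ A.T).T, number=n))                           # p_tensor
print(timeit(lambda: [Xx * u1 + Xy * v1, Yx * u1 + Yy * v1], number=n))  # p_vect
print(timeit(lambda: np.array([[Xx, Xy], [Yx, Yy]]), number=n))          # cost of building A alone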