At the moment, my code is written entirely using numpy arrays (np.array). Define m as an np.array of 100 values, so m.shape = (100,). There is also a two-dimensional array C with C.shape = (100, 100).
The operation I would like to compute is m^T * C * m, where m^T should be of shape (1, 100), m of shape (100, 1), and C of shape (100, 100).
I'm conflicted about how to proceed. If I insist the data types must remain np.arrays, then I should probably use numpy.dot() or numpy.tensordot() and specify the axes. That would be something like

import numpy as np
result = np.dot(C, m)
final = np.dot(m.T, result)

though m.T is an array of the same shape as m, since transposing a 1-D array is a no-op. Also, that's doing two individual operations instead of one.
Otherwise, I should convert everything into np.matrix and use matrix multiplication there. The problem with this is that I must convert all my np.arrays into np.matrix, do the operations, and then convert back to np.array.
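For comparison, the np.matrix round trip would look roughly like this (a sketch with a small array for readability; note that np.matrix is discouraged in modern NumPy in favour of plain arrays):

```python
import numpy as np

m = np.arange(3, dtype=float)  # shape (3,)
C = np.eye(3)

# Convert: np.matrix(m) is a 1x3 row matrix, so transposing gives 3x1.
m_col = np.matrix(m).T
C_mat = np.matrix(C)

# With matrices, * is matrix multiplication: (1x3) * (3x3) * (3x1) -> 1x1.
result_mat = m_col.T * C_mat * m_col

# The result is a 1x1 matrix, not a scalar: convert back by indexing.
scalar = result_mat[0, 0]
```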
What is the most efficient and intelligent thing to do?
EDIT: Based on the answers so far, I think np.dot(m.T, np.dot(C, m)) is probably the best way forward.
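For completeness, the same quadratic form can also be written in a single call with np.einsum, which stays entirely in np.array land, and on Python 3.5+ the @ operator gives an equally compact spelling:

```python
import numpy as np

rng = np.random.default_rng(1)
m = rng.standard_normal(100)
C = rng.standard_normal((100, 100))

# One call: contract both indices of m_i C_ij m_j down to a scalar.
via_einsum = np.einsum('i,ij,j->', m, C, m)

# Equivalent nested-dot and @-operator forms.
via_dot = np.dot(m, np.dot(C, m))
via_matmul = m @ C @ m
```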