121

I'm trying to multiply each of the terms in a 2D array by the corresponding terms in a 1D array. This is very easy if I want to multiply every column by the 1D array, as the numpy.multiply function does. But I want to do the opposite and multiply each row by the corresponding term. In other words, I want to multiply:

[1,2,3]   [0]
[4,5,6] * [1]
[7,8,9]   [2]

and get

[0,0,0]
[4,5,6]
[14,16,18]

but instead I get

[0,2,6]
[0,5,12]
[0,8,18]

Does anyone know if there's an elegant way to do that with numpy? Thanks a lot, Alex

Alex S
    Ah I figured it out just as I submitted the question. First transpose the square matrix, multiply, then transpose the answer. – Alex S Aug 29 '13 at 22:56
  • Better to transpose the row to a column matrix; then you don't have to re-transpose the answer. Instead of `A * B`, you'd do `A * B[...,None]`, which transposes `B` by adding a new axis (`None`). – askewchan Aug 30 '13 at 02:16
  • Thanks, that's true. The problem is that when you have a 1D array, calling .transpose() or .T on it doesn't turn it into a column array; it leaves it as a row, so as far as I know you have to define it as a column right off the bat, like `x = [[1],[2],[3]]` or something. – Alex S Sep 03 '13 at 19:59
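
A minimal sketch of the point in the last comment (array values here are just illustrative): `.T` is a no-op on a 1D array, while adding an axis or reshaping does produce a column:

>>> import numpy as np
>>> b = np.array([0, 1, 2])
>>> b.T.shape             # .T does nothing to a 1D array
(3,)
>>> b[:, None].shape      # adding an axis makes it a column
(3, 1)
>>> b.reshape(-1, 1).shape
(3, 1)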

7 Answers

150

Normal multiplication like you showed:

>>> import numpy as np
>>> m = np.array([[1,2,3],[4,5,6],[7,8,9]])
>>> c = np.array([0,1,2])
>>> m * c
array([[ 0,  2,  6],
       [ 0,  5, 12],
       [ 0,  8, 18]])

If you add an axis, it will multiply the way you want:

>>> m * c[:, np.newaxis]
array([[ 0,  0,  0],
       [ 4,  5,  6],
       [14, 16, 18]])
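
Roughly speaking, this works because `c[:, np.newaxis]` turns the shape-(3,) array into a (3, 1) column, which then broadcasts against each row (a quick check, reusing the arrays above):

>>> c.shape
(3,)
>>> c[:, np.newaxis].shape
(3, 1)
>>> c[:, np.newaxis]
array([[0],
       [1],
       [2]])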

You could also transpose twice:

>>> (m.T * c).T
array([[ 0,  0,  0],
       [ 4,  5,  6],
       [14, 16, 18]])
jterrace
  • With the new-axis method it is possible to multiply two 1D arrays and generate a 2D array, e.g. `[a,b] op [c,d] -> [[a*c, b*c], [a*d, b*d]]`. – kon psych Jun 27 '15 at 09:02
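
A small sketch of the outer-product behaviour described in the comment above (the values are made up for illustration):

>>> x = np.array([10, 20])   # plays the role of [a, b]
>>> y = np.array([3, 4])     # plays the role of [c, d]
>>> y[:, None] * x           # [[a*c, b*c], [a*d, b*d]]
array([[30, 60],
       [40, 80]])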
80

I've compared the different options for speed and found that – much to my surprise – all options (except diag) are equally fast. I personally use

A * b[:, None]

(or (A.T * b).T) because it's short.

[Performance plot: all options except diag_dot are roughly equally fast across array sizes.]


Code to reproduce the plot:

import numpy
import perfplot


def newaxis(data):
    A, b = data
    return A * b[:, numpy.newaxis]


def none(data):
    A, b = data
    return A * b[:, None]


def double_transpose(data):
    A, b = data
    return (A.T * b).T


def double_transpose_contiguous(data):
    A, b = data
    return numpy.ascontiguousarray((A.T * b).T)


def diag_dot(data):
    A, b = data
    return numpy.dot(numpy.diag(b), A)


def einsum(data):
    A, b = data
    return numpy.einsum("ij,i->ij", A, b)


perfplot.save(
    "p.png",
    setup=lambda n: (numpy.random.rand(n, n), numpy.random.rand(n)),
    kernels=[
        newaxis,
        none,
        double_transpose,
        double_transpose_contiguous,
        diag_dot,
        einsum,
    ],
    n_range=[2 ** k for k in range(13)],
    xlabel="len(A), len(b)",
)
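
Not part of the original benchmark, but a quick sanity check that all of the kernels above compute the same thing:

import numpy

A = numpy.random.rand(5, 5)
b = numpy.random.rand(5)

reference = A * b[:, None]
assert numpy.allclose(reference, A * b[:, numpy.newaxis])
assert numpy.allclose(reference, (A.T * b).T)
assert numpy.allclose(reference, numpy.dot(numpy.diag(b), A))
assert numpy.allclose(reference, numpy.einsum("ij,i->ij", A, b))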
Nico Schlömer
18

You could also use matrix multiplication (aka dot product):

import numpy

a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
b = [0, 1, 2]
c = numpy.diag(b)

numpy.dot(c, a)

Which is more elegant is probably a matter of taste.
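
One caveat worth noting (not from the original answer): numpy.diag(b) materialises a full n x n matrix, so for large n the broadcasting variants above need far less memory. On Python 3.5+ with a recent numpy, the same product can also be written with the @ operator:

import numpy

a = numpy.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
b = numpy.array([0, 1, 2])

numpy.diag(b) @ a   # equivalent to numpy.dot(numpy.diag(b), a)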

James K
  • `dot` is really overkill here. You're just doing unnecessary multiplication by 0 and additions to 0. – Bi Rico Aug 30 '13 at 06:18
  • This might also trigger memory issues in case you want to multiply an nx1 vector by an nxd matrix where d is larger than n. – Jonasson Mar 22 '17 at 09:52
  • Downvoting as this is slow _and_ uses a lot of memory when creating the dense `diag` matrix. – Nico Schlömer Jun 25 '18 at 17:09
17

Yet another trick (as of v1.6)

import numpy as np

A = np.arange(1, 10).reshape(3, 3)
b = np.arange(3)

np.einsum('ij,i->ij', A, b)

I'm proficient with numpy broadcasting (newaxis), but I'm still finding my way around this new einsum tool, so I had to play around a bit to find this solution.

Timings (using IPython timeit):

einsum:    4.9 µs
transpose: 8.1 µs
newaxis:   8.35 µs
dot-diag:  10.5 µs

Incidentally, changing the i to a j, np.einsum('ij,j->ij', A, b), produces the matrix that Alex does not want, while np.einsum('ji,j->ji', A, b) does, in effect, the double transpose.
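
For a concrete check of these subscript variants (same A and b as above):

>>> np.einsum('ij,i->ij', A, b)   # scales row i by b[i] (the desired result)
array([[ 0,  0,  0],
       [ 4,  5,  6],
       [14, 16, 18]])
>>> np.einsum('ij,j->ij', A, b)   # scales column j by b[j] (the unwanted result)
array([[ 0,  2,  6],
       [ 0,  5, 12],
       [ 0,  8, 18]])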

hpaulj
  • If you could time this on a computer with arrays large enough that it takes at least a few milliseconds and post the results [here](http://stackoverflow.com/questions/18365073/why-is-numpys-einsum-faster-than-numpys-built-in-functions) along with your relevant system information, it would be much appreciated. – Daniel Aug 30 '13 at 01:44
  • With a larger array (100x100) the relative numbers are about the same. `einsum` (25 µs) is twice as fast as the others (dot-diag slows down more). This is np 1.7, freshly compiled with 'libatlas3gf-sse2' and 'libatlas-base-dev' (Ubuntu 10.4, single processor). `timeit` gives the best of 10000 loops. – hpaulj Aug 30 '13 at 03:12
  • This is a great answer and I think it is the one that should have been accepted. However, the code written above does, in fact, give the matrix Alex was trying to avoid (on my machine). The one hpaulj said is wrong is actually the right one. – Yair Daon Oct 10 '14 at 15:56
  • The timings are misleading here. dot-diag really is far worse than the other three options, and einsum isn't faster than the others either. – Nico Schlömer Jun 25 '18 at 17:16
  • @NicoSchlömer, my answer is nearly 5 yrs old, and many `numpy` versions back. – hpaulj Jun 25 '18 at 17:38
1

For those lost souls on Google: using numpy.expand_dims and then numpy.repeat will work, and will also work in higher-dimensional cases (e.g. multiplying a shape (10, 12, 3) array by a shape (10, 12) one).

>>> import numpy
>>> a = numpy.array([[1,2,3],[4,5,6],[7,8,9]])
>>> b = numpy.array([0,1,2])
>>> b0 = numpy.expand_dims(b, axis = 0)
>>> b0 = numpy.repeat(b0, a.shape[0], axis = 0)
>>> b1 = numpy.expand_dims(b, axis = 1)
>>> b1 = numpy.repeat(b1, a.shape[1], axis = 1)
>>> a*b0
array([[ 0,  2,  6],
       [ 0,  5, 12],
       [ 0,  8, 18]])
>>> a*b1
array([[ 0,  0,  0],
       [ 4,  5,  6],
       [14, 16, 18]])
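
And a brief sketch of the higher-dimensional case mentioned above (continuing the session; shapes are from the answer, data random for illustration). Because broadcasting handles the size-1 axis, the repeat step is actually optional here:

>>> x = numpy.random.rand(10, 12, 3)
>>> w = numpy.random.rand(10, 12)
>>> (x * numpy.expand_dims(w, axis=2)).shape   # each length-3 entry of x scaled by the matching w value
(10, 12, 3)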
0

You need to transform the row array into a column array, which transpose doesn't do for a 1D array. Use reshape instead:

>>> import numpy as np
>>> a = np.array([[1,2,3],[4,5,6],[7,8,9]])
>>> b = np.array([0,1,2])
>>> a * b
array([[ 0,  2,  6],
       [ 0,  5, 12],
       [ 0,  8, 18]])

with reshape:

>>> a * b.reshape(-1,1)
array([[ 0,  0,  0],
       [ 4,  5,  6],
       [14, 16, 18]])
-4

Why don't you just do

>>> import numpy as np
>>> m = np.array([[1,2,3],[4,5,6],[7,8,9]])
>>> c = np.array([0,1,2])
>>> (m.T * c).T

?

Panos