
What are the advantages and disadvantages of each?

From what I've seen, either one can work as a replacement for the other if need be, so should I bother using both or should I stick to just one of them?

Will the style of the program influence my choice? I am doing some machine learning using numpy, so there are indeed lots of matrices, but also lots of vectors (arrays).

Teun Zengerink
levesque
  • I don't have enough information to justify an answer, but from what I can tell the main difference is the implementation of multiplication. A matrix performs matrix/tensor multiplication, whereas an array will do element-wise multiplication. – Mike Axiak Nov 11 '10 at 03:55
  • Python 3.5 added the infix @ operator for matrix multiplication (PEP 465), and NumPy 1.10 added support for it. So if you are using Python 3.5+ and NumPy 1.10+, then you can just write `A @ B` instead of `A.dot(B)`, where `A` and `B` are 2D `ndarray`s. This removes the main advantage of using `matrix` instead of plain `ndarray`s, IMHO. – MiniQuark Feb 29 '16 at 19:47

7 Answers


Numpy matrices are strictly 2-dimensional, while numpy arrays (ndarrays) are N-dimensional. Matrix objects are a subclass of ndarray, so they inherit all the attributes and methods of ndarrays.

The main advantage of numpy matrices is that they provide a convenient notation for matrix multiplication: if a and b are matrices, then a*b is their matrix product.

import numpy as np

a = np.mat('4 3; 2 1')
b = np.mat('1 2; 3 4')
print(a)
# [[4 3]
#  [2 1]]
print(b)
# [[1 2]
#  [3 4]]
print(a*b)
# [[13 20]
#  [ 5  8]]

On the other hand, as of Python 3.5, NumPy supports infix matrix multiplication using the @ operator, so you can achieve the same convenience of matrix multiplication with ndarrays in Python >= 3.5.

import numpy as np

a = np.array([[4, 3], [2, 1]])
b = np.array([[1, 2], [3, 4]])
print(a@b)
# [[13 20]
#  [ 5  8]]

Both matrix objects and ndarrays have .T to return the transpose, but matrix objects also have .H for the conjugate transpose, and .I for the inverse.
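A quick sketch of these attributes, using a small complex-valued example so the conjugate transpose actually differs from the plain transpose:

```python
import numpy as np

m = np.matrix([[1, 1j], [0, 1]])
print(m.T)               # transpose
print(m.H)               # conjugate transpose
print(m.I)               # inverse

# The equivalent operations on a plain ndarray:
a = np.array([[1, 1j], [0, 1]])
print(a.T)               # transpose
print(a.conj().T)        # conjugate transpose
print(np.linalg.inv(a))  # inverse
```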

In contrast, numpy arrays consistently abide by the rule that operations are applied element-wise (except for the new @ operator). Thus, if a and b are numpy arrays, then a*b is the array formed by multiplying the components element-wise:

c = np.array([[4, 3], [2, 1]])
d = np.array([[1, 2], [3, 4]])
print(c*d)
# [[4 6]
#  [6 4]]

To obtain the result of matrix multiplication, you use np.dot (or @ in Python >= 3.5, as shown above):

print(np.dot(c,d))
# [[13 20]
#  [ 5  8]]

The ** operator also behaves differently:

print(a**2)
# [[22 15]
#  [10  7]]
print(c**2)
# [[16  9]
#  [ 4  1]]

Since a is a matrix, a**2 returns the matrix product a*a. Since c is an ndarray, c**2 returns an ndarray with each component squared element-wise.

There are other technical differences between matrix objects and ndarrays (having to do with np.ravel, item selection and sequence behavior).

The main advantage of numpy arrays is that they are more general than 2-dimensional matrices. What happens when you want a 3-dimensional array? Then you have to use an ndarray, not a matrix object. Thus, learning to use matrix objects is more work -- you have to learn matrix object operations, and ndarray operations.
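For instance, a stack of 2x2 matrices is naturally represented as a 3-dimensional ndarray, something a matrix object simply cannot hold:

```python
import numpy as np

# Two 2x2 matrices stacked along a new leading axis
stack = np.array([[[1, 2], [3, 4]],
                  [[5, 6], [7, 8]]])
print(stack.shape)        # (2, 2, 2)
print(stack.sum(axis=0))  # element-wise sum of the two 2x2 matrices
# [[ 6  8]
#  [10 12]]
```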

Writing a program that mixes both matrices and arrays makes your life difficult because you have to keep track of what type of object your variables are, lest multiplication return something you don't expect.

In contrast, if you stick solely with ndarrays, then you can do everything matrix objects can do, and more, except with slightly different functions/notation.

If you are willing to give up the visual appeal of NumPy matrix product notation (which can be achieved almost as elegantly with ndarrays in Python >= 3.5), then I think NumPy arrays are definitely the way to go.

PS. Of course, you really don't have to choose one at the expense of the other, since np.asmatrix and np.asarray allow you to convert one to the other (as long as the array is 2-dimensional).
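A round trip between the two types looks like this:

```python
import numpy as np

m = np.matrix([[4, 3], [2, 1]])
a = np.asarray(m)    # view the matrix as a plain 2-D ndarray
m2 = np.asmatrix(a)  # and back to a matrix

print(type(a))       # <class 'numpy.ndarray'>
print(type(m2))      # <class 'numpy.matrix'>
print(np.array_equal(a, m2))  # True -- same data either way
```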


There is a synopsis of the differences between NumPy arrays vs NumPy matrices here.

smci
unutbu
  • For those wondering, `mat**n` for a matrix can be inelegantly applied to an array with `reduce(np.dot, [arr]*n)` – askewchan Apr 12 '13 at 20:42
  • Or just `np.linalg.matrix_power(mat, n)` – Eric Feb 28 '17 at 11:16
  • I'm wondering whether matrices would be faster... you'd think they have to perform less checks than ndarray. – PascalVKooten Sep 18 '17 at 20:23
  • Actually, timeit tests show ndarray operations such as `np.dot(array2, array2)` are faster than `matrix1*matrix2`. This makes sense because `matrix` is a subclass of ndarray which overrides special methods like `__mul__`. [`matrix.__mul__` calls `np.dot`](https://github.com/numpy/numpy/blob/master/numpy/matrixlib/defmatrix.py#L306). So there is code reusage here. Instead of performing fewer checks, using `matrix*matrix` requires an extra function call. So the advantage of using `matrix` is purely syntactic, not better performance. – unutbu Sep 18 '17 at 20:44
  • 4 * 1 + 3 * 3 giving you 13 when you did np.dot(c,d), isn't this actually called a cross product in math? – PirateApp Apr 20 '18 at 07:07
  • Maybe you could update your answer, regarding "in numpy 1.15 calling np.matrix(...) emits a warning" https://github.com/numpy/numpy/issues/11135#issuecomment-400374970 – rudimeier Jun 26 '18 at 17:43
  • I found np.asarray(np.mat('a b; c d')) convenient when manually entering 2D arrays. – kjl Jan 01 '23 at 07:28
  • Absence of .H is frustrating for ndarrays.... Maybe it will get implemented? – Tunneller Jan 02 '23 at 00:03

Scipy.org recommends that you use arrays:

'array' or 'matrix'? Which should I use? - Short answer

Use arrays.

  • They support multidimensional array algebra that is supported in MATLAB
  • They are the standard vector/matrix/tensor type of NumPy. Many NumPy functions return arrays, not matrices.
  • There is a clear distinction between element-wise operations and linear algebra operations.
  • You can have standard vectors or row/column vectors if you like.

Until Python 3.5 the only disadvantage of using the array type was that you had to use dot instead of * to multiply (reduce) two tensors (scalar product, matrix vector multiplication etc.). Since Python 3.5 you can use the matrix multiplication @ operator.

Given the above, we intend to deprecate matrix eventually.
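The two notations the quote compares can be sketched side by side:

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
x = np.array([1., 1.])

print(A.dot(x))  # works on any NumPy version
print(A @ x)     # Python 3.5+ / NumPy 1.10+, same result
# [3. 7.]
```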

TMBailey
Lee
  • Even though the accepted answer provides more info, the real answer is indeed to stick with `ndarray`. The main argument for using `matrix` would be if your code is heavy in linear algebra and would look less clear with all the calls to the `dot` function. But this argument will disappear in future, now that the @-operator is accepted for use with matrix multiplication, see [PEP 465](https://www.python.org/dev/peps/pep-0465/). This will need Python 3.5 and the latest version of Numpy. The matrix class might be deprecated in the far future, so better to use ndarray for new code ... – Bas Swinckels Aug 10 '15 at 09:01
  • That page graciously forgets about `scipy.sparse` matrices. If you use both dense & sparse matrices in your code, it is much easier to stick to `matrix`. – David Nemeskey Apr 19 '16 at 15:14
  • In my opinion, the main disadvantage of arrays is that column slicing returns flat arrays, which can be confusing and is mathematically not really sound. This also leads to the important disadvantage that numpy arrays cannot be treated in the same way as scipy.sparse matrices, while numpy matrices basically can be exchanged freely with sparse matrices. Kind of absurd in this context that scipy recommends using arrays and then does not provide compatible sparse arrays. – Radio Controlled Dec 20 '17 at 10:52

Just to add one case to unutbu's list.

For me, one of the biggest practical differences between numpy ndarrays and numpy matrices (or matrix languages like MATLAB) is that the dimension is not preserved in reduce operations. Matrices are always 2-D, while the mean of an array, for example, has one dimension less.

For example, demeaning the rows of a matrix or array:

with matrix

>>> m = np.mat([[1,2],[2,3]])
>>> m
matrix([[1, 2],
        [2, 3]])
>>> mm = m.mean(1)
>>> mm
matrix([[ 1.5],
        [ 2.5]])
>>> mm.shape
(2, 1)
>>> m - mm
matrix([[-0.5,  0.5],
        [-0.5,  0.5]])

with array

>>> a = np.array([[1,2],[2,3]])
>>> a
array([[1, 2],
       [2, 3]])
>>> am = a.mean(1)
>>> am.shape
(2,)
>>> am
array([ 1.5,  2.5])
>>> a - am #wrong
array([[-0.5, -0.5],
       [ 0.5,  0.5]])
>>> a - am[:, np.newaxis]  #right
array([[-0.5,  0.5],
       [-0.5,  0.5]])
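One way to keep the reduced dimension with plain arrays is the keepdims argument of the reduce methods, which avoids the manual np.newaxis step:

```python
import numpy as np

a = np.array([[1, 2], [2, 3]])
am = a.mean(1, keepdims=True)  # shape (2, 1) instead of (2,)
print(am.shape)  # (2, 1)
print(a - am)    # broadcasts the same way as the matrix version
# [[-0.5  0.5]
#  [-0.5  0.5]]
```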

I also think that mixing arrays and matrices gives rise to many "happy" debugging hours. However, scipy.sparse matrices are always matrices in terms of operators like multiplication.

Josef

As per the official documentation, it is no longer advisable to use the matrix class, since it will be removed in the future.

https://numpy.org/doc/stable/reference/generated/numpy.matrix.html

As other answers already state, you can achieve all the same operations with NumPy arrays.

hashlash
Aks

As others have mentioned, perhaps the main advantage of matrix was that it provided a convenient notation for matrix multiplication.

However, in Python 3.5 there is finally a dedicated infix operator for matrix multiplication: @.

With recent NumPy versions, it can be used with ndarrays:

A = numpy.ones((1, 3))
B = numpy.ones((3, 3))
A @ B

So nowadays, even more so, when in doubt you should stick to ndarray.

Peque

An advantage of using matrices is easier instantiation from a text string rather than nested square brackets.

With matrices you can do

np.matrix("1, 1+1j, 0; 0, 1j, 0; 0, 0, 1")

and get the desired output directly:

matrix([[1.+0.j, 1.+1.j, 0.+0.j],
        [0.+0.j, 0.+1.j, 0.+0.j],
        [0.+0.j, 0.+0.j, 1.+0.j]])

If you use arrays, this does not work:

np.array("1, 1+1j, 0; 0, 1j, 0; 0, 0, 1")

output:

array('1, 1+1j, 0; 0, 1j, 0; 0, 0, 1', dtype='<U29')
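If you only want the text-parsing convenience, one workaround (mentioned in the comments above) is to parse with np.mat and immediately convert to a plain ndarray, keeping in mind that np.matrix is slated for deprecation:

```python
import numpy as np

# Parse the string with np.mat, then drop down to a plain ndarray
a = np.asarray(np.mat("1, 1+1j, 0; 0, 1j, 0; 0, 0, 1"))
print(type(a))   # <class 'numpy.ndarray'>
print(a.shape)   # (3, 3)
print(a[0, 1])   # (1+1j)
```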
Meowf

Matrix Operations with Numpy Arrays:

I would like to keep updating this answer about matrix operations with numpy arrays, in case users are looking for information about matrices and numpy.

As the accepted answer and numpy-ref.pdf say:

class numpy.matrix will be removed in the future.

So now matrix algebra operations have to be done with NumPy arrays.

a = np.array([[1,3],[-2,4]])
b = np.array([[3,-2],[5,6]]) 

Matrix Multiplication (infix matrix multiplication)

a@b
array([[18, 16],
       [14, 28]])

Transpose:

ab = a@b
ab.T       
array([[18, 14],
       [16, 28]])


Inverse of a matrix:

np.linalg.inv(ab)
array([[ 0.1       , -0.05714286],
       [-0.05      ,  0.06428571]])      

ab_i=np.linalg.inv(ab) 
ab@ab_i  # proof of inverse
array([[1., 0.],
       [0., 1.]]) # identity matrix 

Determinant of a matrix:

np.linalg.det(ab)
279.9999999999999

Solving a Linear System:

x + y = 3
x + 2y = -8
b = np.array([3,-8])
a = np.array([[1,1], [1,2]])
x = np.linalg.solve(a,b)
x
array([ 14., -11.])
# Solution x=14, y=-11

Eigenvalues and Eigenvectors:

a = np.array([[10,-18], [6,-11]])
np.linalg.eig(a)
(array([ 1., -2.]), array([[0.89442719, 0.83205029],
        [0.4472136 , 0.5547002 ]]))
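As a sanity check, each eigenpair returned by np.linalg.eig should satisfy a @ v = w * v, where the columns of the second returned array are the eigenvectors:

```python
import numpy as np

a = np.array([[10, -18], [6, -11]])
w, v = np.linalg.eig(a)
for i in range(len(w)):
    # Column i of v is the eigenvector for eigenvalue w[i]
    print(np.allclose(a @ v[:, i], w[i] * v[:, i]))  # True for each pair
```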
rubengavidia0x