
After reading the PyTorch documentation, I still need help understanding the difference between torch.mm, torch.matmul and torch.mul. As I do not fully understand them, I cannot concisely explain the difference.

import torch

B = torch.tensor([[ 1.1207],
        [-0.3137],
        [ 0.0700],
        [ 0.8378]])

C = torch.tensor([[ 0.5146,  0.1216, -0.5244,  2.2382]])

print(torch.mul(B,C))

print(torch.matmul(B,C))

print(torch.mm(B,C))

All three produce the same output, so they all appear to perform matrix multiplication:

tensor([[ 0.5767,  0.1363, -0.5877,  2.5084],
        [-0.1614, -0.0381,  0.1645, -0.7021],
        [ 0.0360,  0.0085, -0.0367,  0.1567],
        [ 0.4311,  0.1019, -0.4393,  1.8752]])

A = torch.tensor([[1.8351,2.1536], [-0.8320,-1.4578]])
B = torch.tensor([[2.9355, 0.3450], [0.5708, 1.9957]])
print(torch.mul(A,B))
print(torch.matmul(A,B))
print(torch.mm(A,B))

Different outputs are produced. torch.mul no longer performs matrix multiplication (it broadcasts and performs element-wise multiplication instead), whilst the other two still perform matrix multiplication.

tensor([[ 5.3869,  0.7430],
        [-0.4749, -2.9093]])
tensor([[ 6.6162,  4.9310],
        [-3.2744, -3.1964]])
tensor([[ 6.6162,  4.9310],
        [-3.2744, -3.1964]])
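
To double-check what I was seeing, I compared each function against the corresponding operator (this is just my own sanity check):

import torch

A = torch.tensor([[1.8351, 2.1536], [-0.8320, -1.4578]])
B = torch.tensor([[2.9355, 0.3450], [0.5708, 1.9957]])

print(torch.allclose(torch.mul(A, B), A * B))      # True: mul is the element-wise operator
print(torch.allclose(torch.mm(A, B), A @ B))       # True: mm is the matrix product
print(torch.allclose(torch.matmul(A, B), A @ B))   # True: matmul agrees with mm for 2-D inputs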

Inputs

tensor1 = torch.randn(10, 3, 4)
tensor2 = torch.randn(4)

tensor1 = 
tensor([[[-0.2267,  0.6311, -0.5689,  1.2712],
         [-0.0241, -0.5362,  0.5481, -0.4534],
         [-0.9773, -0.6842,  0.6927,  0.3363]],

        [[-2.6759,  0.7817,  2.6821,  0.7037],
         [ 0.1804,  0.3938, -1.2235,  0.8729],
         [-1.9873, -0.5030,  0.0945,  0.2688]],

        [[ 0.4244,  1.7350,  0.0558, -0.1861],
         [-0.9063, -0.4737, -0.4284, -0.3883],
         [ 0.4827, -0.2628,  1.0084,  0.2769]],

        [[ 0.2939,  0.4604,  0.8014, -1.8760],
         [ 1.8807,  0.1623,  0.2344, -0.6221],
         [ 1.3964,  3.1637,  0.7889,  0.1195]],

        [[-0.7202,  1.4250,  2.4302,  1.4811],
         [-0.2301,  0.6280,  0.5379,  0.5178],
         [-2.1073, -1.4399, -0.9451,  0.8534]],

        [[ 2.8178, -0.4451, -0.7871, -0.5198],
         [ 0.2825,  1.0692,  0.1559,  1.2945],
         [-0.5828, -1.6287, -2.0661, -0.4107]],

        [[ 0.5077, -0.6349, -0.0160, -0.4477],
         [-0.8070,  0.3746,  1.1852,  0.0351],
         [-0.6454,  1.5877,  0.8561,  1.1021]],

        [[ 0.1191,  1.0116,  0.5807,  1.2105],
         [-0.5403,  1.2404,  1.1532,  0.6537],
         [ 1.4757, -1.3648, -1.7158, -1.0289]],

        [[-0.1326,  0.3715,  0.2429, -0.0794],
         [ 0.3224, -0.3064,  0.1963,  0.7276],
         [ 0.9098,  1.5984, -1.4953,  0.0420]],

        [[ 0.1511,  0.9691, -0.5204,  0.3858],
         [ 0.4566,  1.5482, -0.3401,  0.5960],
         [-0.9998,  0.7198,  0.9286,  0.4498]]])

tensor2 =
tensor([-1.6350,  1.0335, -0.9023,  0.0696])
print(torch.mul(tensor1,tensor2))
print(torch.matmul(tensor1,tensor2))
print(torch.mm(tensor1,tensor2))

Outputs are all different. I think torch.mul broadcasts and multiplies every 4 elements of the matrix by the vector tensor2, i.e. [-0.2267, 0.6311, -0.5689, 1.2712] x tensor2 element-wise, [-0.0241, -0.5362, 0.5481, -0.4534] x tensor2 element-wise, and so on. I do not understand what torch.matmul is doing. I think it relates to the 5th bullet-point of the documentation (If both arguments...), but I am unable to make sense of it. https://pytorch.org/docs/stable/generated/torch.matmul.html

I think the reason torch.mm is unable to produce an output is that it cannot broadcast (please correct me if I'm wrong).

tensor([[[ 3.7071e-01,  6.5221e-01,  5.1335e-01,  8.8437e-02],
         [ 3.9400e-02, -5.5417e-01, -4.9460e-01, -3.1539e-02],
         [ 1.5979e+00, -7.0715e-01, -6.2499e-01,  2.3398e-02]],

        [[ 4.3752e+00,  8.0790e-01, -2.4201e+00,  4.8957e-02],
         [-2.9503e-01,  4.0699e-01,  1.1040e+00,  6.0723e-02],
         [ 3.2494e+00, -5.1981e-01, -8.5253e-02,  1.8701e-02]],

        [[-6.9397e-01,  1.7931e+00, -5.0379e-02, -1.2945e-02],
         [ 1.4818e+00, -4.8954e-01,  3.8657e-01, -2.7010e-02],
         [-7.8920e-01, -2.7163e-01, -9.0992e-01,  1.9265e-02]],

        [[-4.8055e-01,  4.7582e-01, -7.2309e-01, -1.3051e-01],
         [-3.0750e+00,  1.6770e-01, -2.1146e-01, -4.3281e-02],
         [-2.2832e+00,  3.2697e+00, -7.1183e-01,  8.3139e-03]],

        [[ 1.1775e+00,  1.4727e+00, -2.1928e+00,  1.0304e-01],
         [ 3.7617e-01,  6.4900e-01, -4.8534e-01,  3.6025e-02],
         [ 3.4455e+00, -1.4882e+00,  8.5277e-01,  5.9369e-02]],

        [[-4.6072e+00, -4.6005e-01,  7.1024e-01, -3.6160e-02],
         [-4.6191e-01,  1.1051e+00, -1.4067e-01,  9.0053e-02],
         [ 9.5283e-01, -1.6833e+00,  1.8643e+00, -2.8571e-02]],

        [[-8.3005e-01, -6.5622e-01,  1.4461e-02, -3.1148e-02],
         [ 1.3195e+00,  3.8716e-01, -1.0694e+00,  2.4421e-03],
         [ 1.0553e+00,  1.6409e+00, -7.7250e-01,  7.6669e-02]],

        [[-1.9477e-01,  1.0455e+00, -5.2398e-01,  8.4209e-02],
         [ 8.8343e-01,  1.2820e+00, -1.0405e+00,  4.5478e-02],
         [-2.4128e+00, -1.4106e+00,  1.5482e+00, -7.1578e-02]],

        [[ 2.1675e-01,  3.8391e-01, -2.1914e-01, -5.5219e-03],
         [-5.2707e-01, -3.1668e-01, -1.7711e-01,  5.0619e-02],
         [-1.4876e+00,  1.6520e+00,  1.3493e+00,  2.9198e-03]],

        [[-2.4706e-01,  1.0015e+00,  4.6955e-01,  2.6842e-02],
         [-7.4663e-01,  1.6001e+00,  3.0685e-01,  4.1462e-02],
         [ 1.6347e+00,  7.4395e-01, -8.3792e-01,  3.1291e-02]]])
tensor([[ 1.6247, -1.0409,  0.2891],
        [ 2.8120,  1.2767,  2.6630],
        [ 1.0358,  1.3518, -1.9515],
        [-0.8583, -3.1620,  0.2830],
        [ 0.5605,  0.5759,  2.8694],
        [-4.3932,  0.5925,  1.1053],
        [-1.5030,  0.6397,  2.0004],
        [ 0.4109,  1.1704, -2.3467],
        [ 0.3760, -0.9702,  1.5165],
        [ 1.2509,  1.2018,  1.5720]])
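
For reference, here is the minimal check I ran to confirm that torch.mm rejects these inputs outright rather than broadcasting:

import torch

tensor1 = torch.randn(10, 3, 4)
tensor2 = torch.randn(4)

try:
    print(torch.mm(tensor1, tensor2))
except RuntimeError as e:
    # mm only accepts two 2-D tensors, so a 3-D x 1-D call fails here
    print("torch.mm failed:", e)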

2 Answers


In short:

  • torch.mm - performs a matrix multiplication without broadcasting - (2D tensor) by (2D tensor)
  • torch.mul - performs an elementwise multiplication with broadcasting - (Tensor) by (Tensor or Number)
  • torch.matmul - matrix product with broadcasting - (Tensor) by (Tensor) with different behaviors depending on the tensor shapes (dot product, matrix product, batched matrix products).

Some details:

  1. torch.mm - performs a matrix multiplication without broadcasting

It expects two 2D tensors so n×m * m×p = n×p

From the documentation https://pytorch.org/docs/stable/generated/torch.mm.html:

This function does not broadcast. For broadcasting matrix products, see torch.matmul().
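
A minimal example of the expected shapes (the tensor names here are just for illustration):

import torch

m1 = torch.randn(2, 3)           # n×m
m2 = torch.randn(3, 4)           # m×p
print(torch.mm(m1, m2).shape)    # torch.Size([2, 4]), i.e. n×p
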
  2. torch.mul - performs an elementwise multiplication with broadcasting - (Tensor) by (Tensor or Number)

Docs: https://pytorch.org/docs/stable/generated/torch.mul.html

torch.mul does not perform a matrix multiplication. It broadcasts the two tensors and performs an elementwise multiplication. So when you use it with tensors of shape 4×1 and 1×4, as in your first example, it works like this:

import torch

a = torch.FloatTensor([[1], [2], [3]])
b = torch.FloatTensor([[1, 10, 100]])
a, b = torch.broadcast_tensors(a, b)
print(a)
print(b)
print(a * b)

tensor([[1., 1., 1.],
        [2., 2., 2.],
        [3., 3., 3.]])
tensor([[  1.,  10., 100.],
        [  1.,  10., 100.],
        [  1.,  10., 100.]])
tensor([[  1.,  10., 100.],
        [  2.,  20., 200.],
        [  3.,  30., 300.]])
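
Since the second operand may also be a plain Python number, the same call handles scalar scaling:

import torch

t = torch.FloatTensor([[1, 2], [3, 4]])
print(torch.mul(t, 10))
# tensor([[10., 20.],
#         [30., 40.]])
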
  3. torch.matmul

It is better to check out the official documentation https://pytorch.org/docs/stable/generated/torch.matmul.html as it uses different modes depending on the input tensors. It may perform a dot product, a matrix-matrix product, or a batched matrix product with broadcasting.
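
A quick sketch of those modes (the shapes below are chosen purely for illustration):

import torch

v1, v2 = torch.randn(4), torch.randn(4)
m1, m2 = torch.randn(2, 4), torch.randn(4, 3)
b1, b2 = torch.randn(5, 2, 4), torch.randn(5, 4, 3)

print(torch.matmul(v1, v2).shape)  # torch.Size([]): 1-D x 1-D is a dot product (scalar)
print(torch.matmul(m1, m2).shape)  # torch.Size([2, 3]): 2-D x 2-D is a matrix product
print(torch.matmul(b1, b2).shape)  # torch.Size([5, 2, 3]): 3-D x 3-D is a batched matrix product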

As for your question regarding product of:

tensor1 = torch.randn(10, 3, 4)
tensor2 = torch.randn(4)

it is a batched version of a matrix-vector product. Please check this simple example for understanding:

import torch

# 3x1x3
a = torch.FloatTensor([[[1, 2, 3]], [[3, 4, 5]], [[6, 7, 8]]])
# 3
b = torch.FloatTensor([1, 10, 100])
r1 = torch.matmul(a, b)

r2 = torch.stack((
    torch.matmul(a[0], b),
    torch.matmul(a[1], b),
    torch.matmul(a[2], b),
))
assert torch.allclose(r1, r2)

So it can be seen as multiple operations stacked together across the batch dimension.

Also it may be useful to read about broadcasting:

https://pytorch.org/docs/stable/notes/broadcasting.html#broadcasting-semantics
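
For example, matmul broadcasts the non-matrix (batch) dimensions, so a single matrix can be applied to a whole batch:

import torch

batch = torch.randn(10, 3, 4)
mat = torch.randn(4, 5)
# mat is treated as (1, 4, 5) and broadcast across the 10 batch entries
print(torch.matmul(batch, mat).shape)  # torch.Size([10, 3, 5])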


I want to add an introduction to torch.bmm, which is a batch matrix-matrix product.

torch.bmm(input, mat2, *, out=None) → Tensor

Shape: (b×n×m), (b×m×p) → (b×n×p)

Performs a batch matrix-matrix product of matrices stored in input and mat2. input and mat2 must be 3-D tensors each containing the same number of matrices.

This function does not broadcast.

Example

import torch

input = torch.randn(10, 3, 4)
mat2 = torch.randn(10, 4, 5)
res = torch.bmm(input, mat2)
res.size()  # torch.Size([10, 3, 5])
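
For 3-D inputs like these, torch.matmul performs the same batched product, so the two functions agree (bmm simply skips matmul's broadcasting logic):

import torch

input = torch.randn(10, 3, 4)
mat2 = torch.randn(10, 4, 5)
assert torch.allclose(torch.bmm(input, mat2), torch.matmul(input, mat2))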