
The dimension of array L is (d, a), B is (a, a, N), and R is (a, d). By multiplying these arrays I have to get an array of size (d, d, N). How could I implement this in PyTorch?

roy

3 Answers


A possible and straightforward approach is to apply torch.einsum:

>>> torch.einsum('ij,jkn,kl->iln', L, B, R)

Here j and k are the reduced dimensions of L and R respectively, and n is the "batch" dimension of B.

  • The first matrix multiplication will reduce L@B (let this intermediate result be o):

    ij,jkn->ikn
    
  • The second matrix multiplication will reduce o@R:

    ikn,kl->iln
    

Which overall sums up to the following form:

ij,jkn,kl->iln
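As a sketch (with small test sizes d, a, N assumed purely for illustration), the einsum can be checked against the explicit per-slice product:

```python
import torch

# Assumed test sizes, for illustration only
d, a, N = 3, 5, 7
L = torch.randn(d, a)
B = torch.randn(a, a, N)
R = torch.randn(a, d)

out = torch.einsum('ij,jkn,kl->iln', L, B, R)  # shape (d, d, N)

# Sanity check against the per-slice definition L @ B[:, :, n] @ R
expected = torch.stack([L @ B[:, :, n] @ R for n in range(N)], dim=-1)
print(out.shape)                                 # torch.Size([3, 3, 7])
print(torch.allclose(out, expected, atol=1e-5))  # True
```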
Ivan

This is a batch matrix multiplication, i.e. result[:, :, i] = L @ B[:, :, i] @ R. You can use:

B = B.permute([2,0,1])
result = torch.matmul(torch.matmul(L, B), R).permute([1,2,0])
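A minimal self-contained check of this approach (test sizes assumed): permuting B so the batch dimension comes first lets torch.matmul broadcast L and R across all N slices at once.

```python
import torch

# Assumed test sizes, for illustration only
d, a, N = 3, 5, 7
L = torch.randn(d, a)
B = torch.randn(a, a, N)
R = torch.randn(a, d)

Bp = B.permute([2, 0, 1])  # (N, a, a): move the batch dim to the front
result = torch.matmul(torch.matmul(L, Bp), R).permute([1, 2, 0])  # (d, d, N)

# Each slice matches the per-slice definition L @ B[:, :, i] @ R
for i in range(N):
    assert torch.allclose(result[:, :, i], L @ B[:, :, i] @ R, atol=1e-5)
print(result.shape)  # torch.Size([3, 3, 7])
```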
hellohawaii

N seems to be the batch dimension, so let's ignore it first.

It is then a simple chained matrix multiplication:

import torch

d, a = 3, 5

L = torch.randn(d, a)
B = torch.randn(a, a)
R = torch.randn(a, d)

L.matmul(B).shape  # (d, a)
L.matmul(B).matmul(R).shape  # (d, d)

Now let's add the batch dimension N. Everything is almost the same, but PyTorch works with batch dim first whereas your data is batch dim last, so a bit of movedim is required.

N = 7
B = torch.randn(a, a, N)

L.matmul(B.movedim(-1, 0)).shape  # (N, d, a)
L.matmul(B.movedim(-1, 0)).matmul(R).shape  # (N, d, d)
L.matmul(B.movedim(-1, 0)).matmul(R).movedim(0, -1).shape  # (d, d, N)
paime