
I want to compute the tensor (Kronecker) product of two arrays A and B, each of size L x M, and reshape the result AB so that its size is L^2 x M^2. To me, the easiest way to do it is with this code


abj = 0
do aj = 1, M
  do bj = 1, M
    abj = abj + 1
    abi = 0
    do ai = 1, L
      do bi = 1, L
        abi = abi + 1
        AB(abi,abj) = A(ai,aj)*B(bi,bj)
      end do
    end do
  end do
end do

Is there a more efficient way to accomplish the same task?

EDIT

Two clarifications

  1. Using BLAS/LAPACK is totally fine

  2. Typically, L and M are of the order of hundreds
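For reference, the quadruple loop above is the Kronecker product A ⊗ B, with B's indices varying fastest. A minimal complete program (names and the small test sizes are illustrative only) that exercises the loop and checks one entry against the block formula AB(block ai,aj) = A(ai,aj)*B:

```fortran
! Minimal sketch: the question's loop as a self-contained program.
! The sizes and program name are illustrative, not from the question.
program kron_demo
  implicit none
  integer, parameter :: L = 2, M = 3
  real :: A(L,M), B(L,M), AB(L*L,M*M)
  integer :: ai, aj, bi, bj, abi, abj

  call random_number(A)
  call random_number(B)

  abj = 0
  do aj = 1, M
    do bj = 1, M
      abj = abj + 1
      abi = 0
      do ai = 1, L
        do bi = 1, L
          abi = abi + 1
          ! bi and bj vary fastest, so entry (abi,abj) sits in
          ! block (ai,aj), which is the L x L block A(ai,aj)*B.
          AB(abi,abj) = A(ai,aj)*B(bi,bj)
        end do
      end do
    end do
  end do

  ! Spot-check one entry against the block formula.
  print *, abs(AB(L+1,1) - A(2,1)*B(1,1)) < 1e-6
end program kron_demo
```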

Gippo
  • See [this question](https://stackoverflow.com/questions/40104962/efficient-way-to-perform-tensor-products-in-fortran) for some library suggestions. – veryreverie Jan 18 '22 at 15:21
    My first concern is that a 300x300x300x300 array requires approximately 60 GBytes of memory - If you don't have enough memory you face paging to disk regularly, in which case the efficiency of the compute will be a secondary factor. – Ian Bush Jan 18 '22 at 15:52
  • @IanBush I see. But I guess I cannot do much for this. The aim is to make a coordinate transformation. To be more precise: I have an array M(:,:), I need to make a change of basis, through a basis set v(:,:), for both the coordinates. Instead of doing the transformation for the two entries separately, I am reshaping M and v x v to make the change of coordinates in one step with a single matrix multiplication (also because M(:,:) is already reshaped from the previous steps). Don't know if I have explained myself. – Gippo Jan 18 '22 at 16:01
    If you have access to a cluster 60 GByte, or even 1TByte, is not a lot of memory, though you will need to start considering distributed memory solutions (which could be fairly straightforward if the above is truly representative of the problem). But ultimately the memory scaling with systems size of this approach is **horrible**, and I would spend a good amount of time thinking about how I can avoid it, rather than how I can do it most efficiently. – Ian Bush Jan 18 '22 at 16:06
    The answer to "how do I calculate the tensor product of two ~300*300 matrices" is "don't, if you can avoid it". If you explain a little more about your problem, maybe we can find a way around this? – veryreverie Jan 18 '22 at 16:07
  • OK, I will try to write more about the general problem. Just a fast question. From the point of view of the change of coordinates described above, is it better to make the change of coordinates in two steps with v dot (M dot v) or to do a reshape and do a single M dot ( v x v )? – Gippo Jan 18 '22 at 16:18
    I'm a bit lost with what you're trying to do. Could you ask this question as a new Stack Overflow question rather than in the comments? :) – veryreverie Jan 18 '22 at 20:00
  • I have asked a new question – Gippo Jan 25 '22 at 01:22

1 Answer


I'd be inclined to first create an l x l x m x m array, and then to reshape that, something like

integer, parameter :: l = 10
integer, parameter :: m = 20

integer :: a(l,m)
integer :: b(l,m)
integer :: ab(l*l,m*m)

integer :: intermediate(l,l,m,m)

integer :: i,j

do i=1,m
  do j=1,l
    intermediate(:,j,:,i) = b*a(j,i) ! b's indices vary fastest, matching the question's loop
  enddo
enddo

ab = reshape(intermediate, [l*l, m*m])

This should run a little faster than the naive quadruple loop, as the compiler can vectorise the array-times-scalar assignment (or it can be performed using BLAS if desired).
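To spell out the BLAS route: under the question's index ordering, each column of AB (indexed by the pair aj,bj) is the flattened outer product B(:,bj) * A(:,aj)^T, so it can be filled with one rank-1 update via `dger`. A hedged sketch, assuming double precision and a linked BLAS (the program name and sizes are illustrative):

```fortran
! Sketch: each column of AB as one rank-1 BLAS update.
! dger accumulates A := A + alpha*x*y^T, so AB must start zeroed.
program kron_dger
  implicit none
  external :: dger
  integer, parameter :: L = 3, M = 4
  double precision :: A(L,M), B(L,M), AB(L*L,M*M)
  integer :: aj, bj, abj

  call random_number(A)
  call random_number(B)

  AB = 0.0d0
  abj = 0
  do aj = 1, M
    do bj = 1, M
      abj = abj + 1
      ! Column abj of AB is contiguous, so it can be viewed as an
      ! L x L matrix with leading dimension L and receive the
      ! outer product B(:,bj) * A(:,aj)^T directly.
      call dger(L, L, 1.0d0, B(1,bj), 1, A(1,aj), 1, AB(1,abj), L)
    end do
  end do
end program kron_dger
```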

The representation in memory of ab and intermediate is identical, so you might want to use associate to avoid the reshape and copy, e.g.

integer, parameter :: l = 10
integer, parameter :: m = 20

integer :: a(l,m)
integer :: b(l,m)
integer :: ab(l*l,m*m)

integer :: i,j,x,y

do i=1,m
  x = m*(i-1)
  do j=1,l
    y = l*(j-1)
    associate(temp => ab(y+1:y+l, x+1:x+m))
     temp = b*a(j,i) ! again b varies fastest, matching the question's loop
    end associate
  enddo
enddo
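As the comments point out, if the Kronecker product is only an intermediate step in a two-sided change of basis, the L^2 x M^2 array never needs to exist: two ordinary matrix multiplications give the same result with O(n^3) work and O(n^2) memory, instead of the O(n^4) memory needed to store v ⊗ v. A hedged sketch of that alternative (the names n, mat, and v are illustrative, not from the question):

```fortran
! Sketch of the comments' suggestion: transform both coordinates of
! mat directly, out = transpose(v) . mat . v, instead of reshaping
! and multiplying by the explicit Kronecker product (v kron v).
program basis_change
  implicit none
  integer, parameter :: n = 300
  double precision, allocatable :: mat(:,:), v(:,:), tmp(:,:), out(:,:)

  allocate(mat(n,n), v(n,n), tmp(n,n), out(n,n))
  call random_number(mat)
  call random_number(v)

  ! Two n x n multiplications; each could also be a BLAS dgemm call.
  tmp = matmul(mat, v)
  out = matmul(transpose(v), tmp)
end program basis_change
```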
veryreverie