
The code below performs the same operation on gpuArrays a and b in two different ways. The first part computes (a'*(a*b)')', while the second part computes a*b*a. The two expressions are mathematically identical, and the results are verified to be the same.
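(The identity holds because (X*Y)' = Y'*X'. As a quick language-neutral sanity check of the algebra — a NumPy/SciPy sketch on the CPU, not the gpuArray code in question — the two expressions agree up to round-off:)

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(1)
a = sparse.random(50, 50, density=0.1, random_state=1, format="csr")  # sparse, like sprand
b = rng.random((50, 50))                                              # full, like rand

c1 = (a.T @ (a @ b).T).T   # (a'*(a*b)')'
c2 = (a @ b) @ a           # a*b*a

# The two results agree up to floating-point round-off
print(np.max(np.abs(np.asarray(c1 - c2))))
```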

%function test
clear
rng('default');rng(1);
a=sprand(3000,3000,0.1);
b=rand(3000,3000);
a=gpuArray(a);
b=gpuArray(b);
tic;
c1=gather(transpose(transpose(a)*transpose(a*b)));
disp(['time for (a''*(a*b)'')'': ' , num2str(toc),'s'])

clearvars -except c1

rng('default');
rng(1)
a=sprand(3000,3000,0.1);
b=rand(3000,3000);
a=gpuArray(a);
b=gpuArray(b);
tic;
c2=gather(a*b*a);
disp(['time for a*b*a: ' , num2str(toc),'s'])

disp(['error = ',num2str(max(max(abs(c1-c2))))])

%end

However, computing (a'*(a*b)')' is roughly 4 times faster than computing a*b*a. Here is the output of the above script in R2018a on an Nvidia K20 (I've tried different versions and different GPUs with similar behaviour).

>> test
time for (a'*(a*b)')': 0.43234s
time for a*b*a: 1.7175s
error = 2.0009e-11

Even more strangely, if the first and last lines of the above script are uncommented (turning it into a function), then both computations take the longer amount of time (~1.7s instead of ~0.4s). Below is the output for this case:

>> test
time for (a'*(a*b)')': 1.717s
time for a*b*a: 1.7153s
error = 1.0914e-11

I'd like to know what is causing this behaviour, and how to perform a*b*a or (a'*(a*b)')' (or both) in the shorter amount of time (i.e. ~0.4s rather than ~1.7s) inside a MATLAB function rather than inside a script.

avgn
  • Just to be 100% sure, run both tests 100 times and compute the average run time. Also, what are the first and last lines? The function? – Ander Biguri May 01 '18 at 10:06
  • @PatrickRoberts the matrices are identical for both operations as I am using the same seed for the random number generator (You can see that the final answer is also identical). – avgn May 01 '18 at 17:35
  • @AnderBiguri I had tried that and the same behaviour persists. But I posted the simpler version above to keep the code simple and readable. – avgn May 01 '18 at 17:36

3 Answers


There seems to be an issue with multiplication of two sparse matrices on the GPU: multiplying a sparse matrix by a full matrix is more than 1000 times faster than multiplying sparse by sparse. A simple example:

str={'sparse*sparse','sparse*full'};
for ii=1:2
    rng(1);
    a=sprand(3000,3000,0.1);
    b=sprand(3000,3000,0.1);
    if ii==2
        b=full(b);
    end
    a=gpuArray(a);
    b=gpuArray(b);
    tic
    c=a*b;
    disp(['time for ',str{ii},': ' , num2str(toc),'s'])
end

In your context, it is the last multiplication that does it. To demonstrate, I replace a with a duplicate c and multiply by it twice: once as a sparse matrix and once as a full matrix.

str={'a*b*a','a*b*full(a)'};
for ii=1:2
    %rng('default');
    rng(1)
    a=sprand(3000,3000,0.1);
    b=rand(3000,3000);
    rng(1)
    c=sprand(3000,3000,0.1);
    if ii==2
        c=full(c);
    end
    a=gpuArray(a);
    b=gpuArray(b);
    c=gpuArray(c);
    tic;
    c1{ii}=a*b*c;
    disp(['time for ',str{ii},': ' , num2str(toc),'s'])
end
disp(['error = ',num2str(max(max(abs(c1{1}-c1{2}))))])

I may be wrong, but my conclusion is that a*b*a involves a multiplication of two sparse matrices (a and a again) and is not handled well, while the transpose() approach splits the process into two multiplication stages, neither of which involves two sparse matrices.
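(The GPU timings above can't be reproduced off-GPU, but sparse-by-sparse and sparse-by-full multiplications also go through separate code paths on the CPU; a rough SciPy sketch — the sizes are arbitrary and absolute timings are machine-dependent:)

```python
import time
import numpy as np
from scipy import sparse

n = 500
a = sparse.random(n, n, density=0.1, random_state=1, format="csr")
b_sparse = sparse.random(n, n, density=0.1, random_state=2, format="csr")
b_full = b_sparse.toarray()

t0 = time.perf_counter()
c_ss = a @ b_sparse                      # sparse * sparse -> sparse result
t_ss = time.perf_counter() - t0

t0 = time.perf_counter()
c_sf = a @ b_full                        # sparse * full -> full result
t_sf = time.perf_counter() - t0

print(f"sparse*sparse: {t_ss:.4f}s, sparse*full: {t_sf:.4f}s")
# Both code paths compute the same product
print(np.max(np.abs(np.asarray(c_ss.toarray() - c_sf))))
```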

Yuval Harpaz
  • 1,416
  • 1
  • 12
  • 16
  • Good answer! Note that for proper timing you'd either loop a thousand times and average over it, or use [`timeit()`](http://mathworks.com/help/matlab/ref/timeit.html) to do this in one go. Doesn't really matter for differences of a 1000 times, but e.g. the graph [here](https://stackoverflow.com/a/32146700/5211833) shows you that individual time tests can vary quite a bit. – Adriaan May 07 '18 at 07:29
  • Thanks! And good guess! However, I believe sparse-sparse routines are not optimized for GPU, since they cannot be efficiently threaded. This may explain your observation. Based on my phone calls with MathWorks, I think the difference in performance is related to only one of the row-major and col-major sparse-dense being threaded (but not both). @Ander Biguri touched on this also in his answer. – avgn May 07 '18 at 22:43

EDIT 2: I might have been right; see this other answer.

EDIT: They use MAGMA, which is column major. My answer does not hold; however, I will leave it here for a while in case it can help crack this strange behavior.

The below answer is wrong

This is my guess; I cannot tell you 100% without knowing the code under MATLAB's hood.


Hypothesis: MATLAB's parallel computing code uses CUDA libraries, not their own.

Important information

  • MATLAB is column major and CUDA is row major.
  • There is no such thing as a 2D matrix in memory, only a 1D array addressed with 2 indices

Why does this matter? Because CUDA is highly optimized code that uses the memory structure to maximize cache hits per kernel (the slowest operation on GPUs is reading memory). This means a standard CUDA matrix multiplication code will exploit the order of memory reads to make sure they are adjacent. However, what is adjacent memory in row-major is not adjacent in column-major.
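The layout distinction can be made concrete in NumPy, which supports both orders (this is a general memory-layout illustration, not a claim about MATLAB's or CUDA's actual internals):

```python
import numpy as np

m_row = np.arange(9, dtype=np.float64).reshape(3, 3)  # row-major (C order), like CUDA
m_col = np.asfortranarray(m_row)                      # column-major (Fortran order), like MATLAB

# Same logical matrix, opposite in-memory order: in C order the small
# stride (8 bytes) moves along a row; in Fortran order it moves down a column.
print(m_row.strides)  # (24, 8)
print(m_col.strides)  # (8, 24)

# Transposing just swaps the strides, so the transpose of a row-major
# buffer is already a valid column-major view -- no data movement needed.
print(m_row.T.flags['F_CONTIGUOUS'])  # True
```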

So, as someone writing software, there are 2 solutions to this:

  1. Write your own column-major algebra libraries in CUDA
  2. Take every input/output from MATLAB and transpose it (i.e. convert from column-major to row major)

They have done point 2. Assuming that there is a smart JIT compiler for the MATLAB Parallel Computing Toolbox (a reasonable assumption), for the second case it takes a and b, transposes them, does the maths, and transposes the output when you gather.

In the first case, however, you do not need to transpose the output: it is internally already transposed, and the JIT catches this, so instead of calling gather(transpose( XX )) it just skips the output transposition. The same with transpose(a*b). Note that transpose(a*b)=transpose(b)*transpose(a), so suddenly no transposes are needed (they are all internally skipped). A transposition is a costly operation.
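That transpose identity, transpose(a*b) = transpose(b)*transpose(a), is easy to check numerically (plain NumPy on the CPU; the shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((4, 5))
b = rng.random((5, 3))

lhs = (a @ b).T   # transpose(a*b)
rhs = b.T @ a.T   # transpose(b)*transpose(a)

print(np.allclose(lhs, rhs))  # True
```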

Indeed there is a weird thing here: making the code a function suddenly makes it slow. My best guess is that, because the JIT behaves differently in different situations, it doesn't catch all this transpose elision inside a function and just does all the operations anyway, losing the speed-up.


Interesting observation: on my PC it takes the same time on the CPU as on the GPU to do a*b*a.

Ander Biguri
  • Thanks for your response. Based on my conversations with MathWorks, they are using CUDA libraries now. Just posting this here so others can know. – avgn May 07 '18 at 22:45

I got in touch with MathWorks tech support, and Rylan finally shed some light on this issue. (Thanks, Rylan!) His full response is below. The function-vs-script issue appears to be related to certain optimizations MATLAB applies automatically to functions (but not scripts) not working as expected.

Rylan's response:

Thank you for your patience on this issue. I have consulted with the MATLAB GPU computing developers to understand this better.

This issue is caused by internal optimizations done by MATLAB when encountering some specific operations like matrix-matrix multiplication and transpose. Some of these optimizations may be enabled specifically when executing a MATLAB function (or anonymous function) rather than a script.

When your initial code was being executed from a script, a particular matrix transpose optimization is not performed, which results in the 'res2' expression being faster than the 'res1' expression:

  n = 2000;
  a=gpuArray(sprand(n,n,0.01)); 
  b=gpuArray(rand(n));

  tic;res1=a*b*a;wait(gpuDevice);toc                                         % Elapsed time is 0.884099 seconds.
  tic;res2=transpose(transpose(a)*transpose(a*b));wait(gpuDevice);toc        % Elapsed time is 0.068855 seconds.

However when the above code is placed in a MATLAB function file, an additional matrix transpose-times optimization is done which causes the 'res2' expression to go through a different code path (and different CUDA library function call) compared to the same line being called from a script. Therefore this optimization generates slower results for the 'res2' line when called from a function file.

To avoid this issue from occurring in a function file, the transpose and multiply operations would need to be split in a manner that stops MATLAB from applying this optimization. Separating each clause within the 'res2' statement seems to be sufficient for this:

  tic;i1=transpose(a);i2=transpose(a*b);res3=transpose(i1*i2);wait(gpuDevice);toc      % Elapsed time is 0.066446 seconds.

In the above line, 'res3' is being generated from two intermediate matrices: 'i1' and 'i2'. The performance (on my system) seems to be on par with that of the 'res2' expression when executed from a script; in addition the 'res3' expression also shows similar performance when executed from a MATLAB function file. Note however that additional memory may be used to store the transposed copy of the initial array. Please let me know if you see different performance behavior on your system, and I can investigate this further.

Additionally, the 'res3' operation shows faster performance when measured with the 'gputimeit' function too. Please refer to the attached 'testscript2.m' file for more information on this. I have also attached 'test_v2.m' which is a modification of the 'test.m' function in your Stack Overflow post.

Thank you for reporting this issue to me. I would like to apologize for any inconvenience caused by this issue. I have created an internal bug report to notify the MATLAB developers about this behavior. They may provide a fix for this in a future release of MATLAB.

Since you had an additional question about comparing the performance of GPU code using 'gputimeit' vs. using 'tic' and 'toc', I just wanted to provide one suggestion which the MATLAB GPU computing developers had mentioned earlier. It is generally good to also call 'wait(gpuDevice)' before the 'tic' statements to ensure that GPU operations from the previous lines don't overlap in the measurement for the next line. For example, in the following lines:

  b=gpuArray(rand(n));
  tic; res1=a*b*a; wait(gpuDevice); toc  

if the 'wait(gpuDevice)' is not called before the 'tic', some of the time taken to construct the 'b' array from the previous line may overlap and get counted in the time taken to execute the 'res1' expression. This would be preferred instead:

  b=gpuArray(rand(n));
  wait(gpuDevice); tic; res1=a*b*a; wait(gpuDevice); toc  

Apart from this, I am not seeing any specific issues in the way that you are using the 'tic' and 'toc' functions. However note that using 'gputimeit' is generally recommended over using 'tic' and 'toc' directly for GPU-related profiling.

I will go ahead and close this case for now, but please let me know if you have any further questions about this.

%testscript2.m
n = 2000;
a = gpuArray(sprand(n, n, 0.01)); 
b = gpuArray(rand(n)); 

gputimeit(@()transpose_mult_fun(a, b))
gputimeit(@()transpose_mult_fun_2(a, b))

function out = transpose_mult_fun(in1, in2)

i1 = transpose(in1);
i2 = transpose(in1*in2);

out = transpose(i1*i2);

end

function out = transpose_mult_fun_2(in1, in2)

out = transpose(transpose(in1)*transpose(in1*in2));

end

%test_v2.m

function test_v2

clear

%% transposed expression
n = 2000;
rng('default');rng(1);
a = sprand(n, n, 0.1);
b = rand(n, n);
a = gpuArray(a);
b = gpuArray(b);

tic;
c1 = gather(transpose( transpose(a) * transpose(a * b) ));

disp(['time for (a''*(a*b)'')'': ' , num2str(toc),'s'])

clearvars -except c1

%% non-transposed expression
rng('default');
rng(1)
n = 2000;
a = sprand(n, n, 0.1);
b = rand(n, n);
a = gpuArray(a);
b = gpuArray(b);

tic;
c2 = gather(a * b * a);

disp(['time for a*b*a: ' , num2str(toc),'s'])
disp(['error = ',num2str(max(max(abs(c1-c2))))])

%% split equivalent
rng('default');
rng(1)
n = 2000;
a = sprand(n, n, 0.1);
b = rand(n, n);
a = gpuArray(a);
b = gpuArray(b);

tic;
intermediate1 = transpose(a);
intermediate2 = transpose(a * b);
c3 = gather(transpose( intermediate1 * intermediate2 ));

disp(['time for split equivalent: ' , num2str(toc),'s'])
disp(['error = ',num2str(max(max(abs(c1-c3))))])

end
avgn