
I would like to optimize this piece of MATLAB code, but so far I have failed. I have tried different combinations of repmat, sum, and cumsum, but none of my attempts produce the correct result. I would appreciate some expert guidance on this tough problem.

S=1000; T=10;
X=rand(T,S);
X=sort(X,1,'ascend');
Result=zeros(S,1);
for c=1:T-1
    for cc=c+1:T
        d=(X(cc,:)-X(c,:))-(cc-c)/T;
        Result=Result+abs(d');
    end
end

Basically, I create 1000 vectors of 10 random numbers, and for each vector I calculate, for each pair of values (say the mth and the nth), the difference between them minus (n-m)/T. I sum over all possible pairs and return the result for every vector.

I hope this explanation is clear,

Thanks a lot in advance.

SebDL
  • You can regroup in terms of `X(j,:) - j/T`. Two terms of this form appear inside your expression for `d`. By doing the subtraction once outside the loop, you can save a much larger number of repeated subtractions inside. – Ben Voigt Jan 12 '18 at 05:52
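A minimal sketch of the regrouping described in the comment above (assuming the same X, S, and T as in the question): precompute Y(j,:) = X(j,:) - j/T once, so the inner loop does one subtraction per pair instead of two subtractions and a division:

```matlab
% Sketch of the suggested regrouping (same X, S, T as in the question).
% Y(j,:) = X(j,:) - j/T is computed once, outside the pair loop.
Y = bsxfun(@minus, X, (1:T)'/T);   % T-by-S; equivalently X - (1:T)'/T in R2016b+
Result = zeros(S,1);
for c = 1:T-1
    for cc = c+1:T
        Result = Result + abs(Y(cc,:) - Y(c,:))';
    end
end
```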

3 Answers


The nchoosek(v,k) function generates all combinations of the elements in v taken k at a time. We can use it to generate all possible pairs of indices and thereby vectorize the loops. It appears that in this case the vectorization doesn't actually improve performance (at least on my machine with 2017a). Maybe someone will come up with a more efficient approach.

idx = nchoosek(1:T,2);
d = bsxfun(@minus,(X(idx(:,2),:) - X(idx(:,1),:)), (idx(:,2)-idx(:,1))/T);
Result = sum(abs(d),1)';
jodag
    I assume it doesn't improve performance because `nchoosek` is slow (especially for even slightly large inputs) and then `bsxfun` is convenient but not necessarily fast, you might find [this question](https://stackoverflow.com/questions/12951453/in-matlab-when-is-it-optimal-to-use-bsxfun) interesting, about best times to use `bsxfun`. Having said all that, this is still a neat solution. – Wolfie Jan 12 '18 at 08:28

It is at least easy to vectorize your inner loop:

Result=zeros(S,1);
for c=1:T-1
   d=(X(c+1:T,:)-X(c,:))-((c+1:T)'-c)./T;
   Result=Result+sum(abs(d),1)';
end

Here, I'm using the new automatic singleton expansion. If you have an older version of MATLAB you'll need to use bsxfun for two of the subtraction operations. For example, X(c+1:T,:)-X(c,:) is the same as bsxfun(@minus,X(c+1:T,:),X(c,:)).
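For reference, here is the same inner-loop vectorization written entirely with bsxfun, which should run on releases before automatic singleton expansion was introduced (a sketch, not timed):

```matlab
% bsxfun version of the vectorized inner loop, for MATLAB releases
% before R2016b (no implicit singleton expansion).
Result = zeros(S,1);
for c = 1:T-1
    d = bsxfun(@minus, X(c+1:T,:), X(c,:));    % pairwise row differences
    d = bsxfun(@minus, d, ((c+1:T)'-c)/T);     % subtract (cc-c)/T per row
    Result = Result + sum(abs(d),1)';
end
```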

What is happening in this bit of code is that instead of looping over cc=c+1:T, we take all of those indices at once, simply replacing cc with c+1:T. d is then a matrix with multiple rows (9 in the first iteration, and one fewer in each subsequent iteration).

Surprisingly, this is slower than the double loop, and similar in speed to Jodag's answer.

Next, we can try to improve indexing. Note that the code above extracts data row-wise from the matrix. MATLAB stores data column-wise. So it's more efficient to extract a column than a row from a matrix. Let's transpose X:

X=X';
Result=zeros(S,1);
for c=1:T-1
   d=(X(:,c+1:T)-X(:,c))-((c+1:T)-c)./T;
   Result=Result+sum(abs(d),2);
end

This is more than twice as fast as the code that indexes row-wise.

But of course the same trick can be applied to the code in the question, speeding it up by about 50%:

X=X';
Result=zeros(S,1);
for c=1:T-1
   for cc=c+1:T
      d=(X(:,cc)-X(:,c))-(cc-c)/T;
      Result=Result+abs(d);
   end
end

My takeaway message from this exercise is that MATLAB's JIT compiler has improved things a lot. Back in the day, any sort of loop would grind code to a halt. Today it's not necessarily the worst approach, especially if all you do is call built-in functions.

Cris Luengo
  • Yup, as you say, MATLAB's JIT is good enough nowadays that it will effectively vectorize the code without you needing to make the effort. They are making us lazy :P – Ander Biguri Jan 12 '18 at 09:54
  • Thanks a lot for the help. I have tried the proposed changes and I updated the question with the results. – SebDL Jan 12 '18 at 10:47

Update: here are the results for the running times of the different proposals (10^5 trials): [plot of running times for each proposal]

So it looks like transposing the matrix is the most effective intervention, and my original double-loop implementation, amazingly, beats the vectorized versions. However, in my hands (2017a) the improvement is only 16.6% over the original using the mean (18.2% using the median).

Maybe there is still room for improvement?
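One further idea (a sketch, not benchmarked here): combining the regrouping from the comment under the question with a sort removes the pair loop entirely. For sorted values y(1) <= ... <= y(T), the sum of all pairwise absolute differences equals sum over k of (2k - T - 1)*y(k):

```matlab
% Sketch: regroup as Y(j,:) = X(j,:) - j/T, then use the identity that for
% sorted y(1) <= ... <= y(T), the sum over pairs of |y(j)-y(i)| equals
% sum_k (2k - T - 1)*y(k). This reduces the work to one sort per column.
Y = sort(bsxfun(@minus, X, (1:T)'/T), 1);  % sort each column of X - j/T
w = 2*(1:T) - T - 1;                       % rank weights, 1-by-T
Result = (w * Y)';                         % S-by-1
```

For T=3 and a column y = [1; 2; 4], the pairwise sums give (2-1)+(4-1)+(4-2) = 6, and the weighted sum gives -2*1 + 0*2 + 2*4 = 6, matching the identity.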

SebDL
  • I have a hard time deciphering 2^(-7.8); why not plot on a linear axis? In my experiments the 2nd and 3rd methods were about equal, and the last two both took about half the time of the methods they are based on. Interesting to see how these timings change from machine to machine. Maybe also because I used a larger array than you had in the question? I bet the timing differences depend a lot on data size! Anyway, thanks for posting this summary! – Cris Luengo Jan 13 '18 at 02:54