
I have two images, say P and S, of size 8192×200, and I want to calculate a custom "Euclidean distance" between them. Currently I use the following steps:

  1. Reshape the images into a pair of column and row vectors:

    Ip = Ip(:).';
    Is = Is(:);
    
  2. Calculate a metric matrix, G, whose entries are given by the formula

    G(i,j) = 1/(2*pi*r*r) * exp((-d*d)/(2*r*r));
    

    where r is a global parameter that varies from 0 to 20, say, and d is the distance between pixel i and pixel j. E.g., if pixel i is (k,l) and pixel j is (k1,l1), then d = sqrt((k-k1)^2 + (l-l1)^2);. Pixel 1 will be (1,1), Pixel 2 will be (1,2), and so on. Therefore, the size of matrix G will be 1638400×1638400.

  3. Compute the final (scalar) Euclidean distance between two images, using:

    ImEuDist = sqrt( (Ip-Is) * G * (Ip-Is).' );  
    

I have already written some code using a mex function, but it takes too long to produce a result (5–6 hours) - see this SO question for the code and more discussion.

Please help me to optimize this; I would ideally like it to run in a matter of seconds. Note that I am not interested in solutions involving the GPU.
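For concreteness, the three steps can be sketched end-to-end in Python/NumPy (an assumed port of the description above, not the asker's MATLAB; feasible only for toy sizes, since at 8192×200 the matrix G would be 1638400×1638400):

```python
import numpy as np

def image_distance_direct(P, S, r):
    """Build G explicitly and return sqrt((P-S) G (P-S)') for small images."""
    h, w = P.shape
    # pixel 1 is (1,1), pixel 2 is (1,2), ...: row-major pixel coordinates
    coords = np.array([(k, l) for k in range(h) for l in range(w)], dtype=float)
    # d2[i, j] = squared distance between pixel i and pixel j
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2 * r * r)) / (2 * np.pi * r * r)
    diff = (P - S).ravel()
    return np.sqrt(diff @ G @ diff)

rng = np.random.default_rng(0)
P, S = rng.random((4, 3)), rng.random((4, 3))
print(image_distance_direct(P, S, r=2.0))
```

At full size, G alone would hold roughly 2.7·10^12 entries, which is why any fast solution has to avoid forming it.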

  • I still think you should give a diagram..just make something simple in paint – dan-man May 02 '16 at 15:17
  • Thanks for the edit but in the first step I am reshaping both the images as either row or column vector simultaneously, not into a pair of row and column vectors. May I know your idea behind it? – nagarwal May 03 '16 at 07:24
  • Let me know if you still need an image. I can put a simple one. – nagarwal May 03 '16 at 07:26
  • An image would be good, yes. I think your first step is not the right approach, given what you are trying to do. My edit does change it slightly (you flipped both images before converting to vectors) but that's not particularly interesting/relevant to solving the whole problem. I'm not sure what you mean by "simultaneously" in your comment. – dan-man May 03 '16 at 07:42
  • By simultaneously, I mean that both `Ip` and `Is` are either row or column vectors. It excludes the cases that `Ip` is row vector and `Is` is column vector and vice versa. – nagarwal May 03 '16 at 08:21
  • no. what you wrote originally had one as a row and one as a column. But, I wouldn't focus on that because I think it's actually irrelevant to solving the problem. – dan-man May 03 '16 at 08:24
  • Yeah..exactly...!! – nagarwal May 03 '16 at 09:46

1 Answer


If I've understood correctly, you should be able to do the following, and have it run in under 2s:

sample data:

s1 = 8192; s2 = 200;
img_a = rand(s1, s2);
img_b = rand(s1, s2);
r = 2;

and the calculation itself:

img_diff = img_a - img_b;
kernel = bsxfun(@plus, (-s1:s1).^2.', (-s2:s2).^2);
kernel = 1/(2*pi*r^2) * exp(-kernel / (2*r^2));
g = conv2(img_diff, kernel, 'same');
res = g(:)' * img_diff(:); 
res = sqrt(res);
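To see why the convolution does the job, here is a hedged Python/NumPy check (an assumed equivalent, not the MATLAB above): convolving the difference image with the kernel of Gaussian weights at every pixel offset realises G*(Ip-Is) without ever forming G, as a comparison against the brute-force quadratic form on a tiny image confirms:

```python
import numpy as np
from scipy.signal import convolve2d

def dist_via_conv(P, S, r):
    h, w = P.shape
    diff = P - S
    ky, kx = np.mgrid[-h:h + 1, -w:w + 1]     # every row/column offset, as in (-s1:s1)
    kernel = np.exp(-(ky**2 + kx**2) / (2 * r**2)) / (2 * np.pi * r**2)
    g = convolve2d(diff, kernel, mode='same')  # realises G*(Ip-Is) without forming G
    return np.sqrt(np.sum(g * diff))

def dist_brute(P, S, r):
    """Brute-force reference: the quadratic form over all pixel pairs."""
    h, w = P.shape
    diff = P - S
    total = 0.0
    for a in range(h):
        for b in range(w):
            for c in range(h):
                for e in range(w):
                    wgt = np.exp(-((a - c)**2 + (b - e)**2) / (2 * r**2)) / (2 * np.pi * r**2)
                    total += diff[a, b] * wgt * diff[c, e]
    return np.sqrt(total)

rng = np.random.default_rng(0)
P, S = rng.random((5, 4)), rng.random((5, 4))
assert np.isclose(dist_via_conv(P, S, 2.0), dist_brute(P, S, 2.0))
```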

That conv2-based version takes about 25 s. To get down to 2 s, you need to replace the standard conv2 with a faster, FFT-based convolution. See this and this:

function c = conv2fft(X, Y)
    % up to small floating-point differences, this matches the inbuilt
    % conv2(X, Y, 'same') when Y is (2*m+1)-by-(2*n+1) for an m-by-n X,
    % which is the case for the kernel built above
    X1 = [X zeros(size(X,1), size(Y,2)-1);
          zeros(size(Y,1)-1, size(X,2)+size(Y,2)-1)];
    Y1 = zeros(size(X1)); 
    Y1(1:size(Y,1), 1:size(Y,2)) = Y;
    c = ifft2(fft2(X1).*fft2(Y1));
    c = c(size(X,1)+1:size(X,1)+size(X,1), size(X,2)+1:size(X,2)+size(X,2));
end
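The same FFT trick can be sketched in Python/NumPy (an assumed port of conv2fft, verified against SciPy's direct convolution; the centre crop is valid for the case used here, a (2m+1)-by-(2n+1) kernel over an m-by-n image):

```python
import numpy as np
from scipy.signal import convolve2d

def conv2fft_same(X, Y):
    """FFT-based 'same' convolution for a (2m+1)-by-(2n+1) kernel Y."""
    m, n = X.shape
    # zero-pad both arrays to the full-convolution size, multiply spectra
    full = (X.shape[0] + Y.shape[0] - 1, X.shape[1] + Y.shape[1] - 1)
    c = np.fft.ifft2(np.fft.fft2(X, full) * np.fft.fft2(Y, full)).real
    return c[m:2 * m, n:2 * n]   # centre crop, mirrors c(size(X,1)+1:..., ...)

rng = np.random.default_rng(0)
X = rng.random((6, 5))
Y = rng.random((13, 11))         # (2*6+1)-by-(2*5+1), as in the answer
assert np.allclose(conv2fft_same(X, Y), convolve2d(X, Y, mode='same'))
```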

Incidentally, if you still want it to go faster, you can exploit the fact that exp(-d^2/(2*r^2)) gets very close to zero once d exceeds a few multiples of r: you can crop your kernel to just a tiny rectangle around its centre, rather than the huge (2*s1+1)-by-(2*s2+1) array suggested above. A smaller kernel means conv2fft (or especially conv2) will run faster.
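The truncation claim is easy to check numerically; a Python/NumPy sketch (assumed sizes and r), showing that cropping the kernel to offsets within about 8·r of the centre leaves the result unchanged to floating-point precision:

```python
import numpy as np
from scipy.signal import convolve2d

def weighted_dist(diff, r, radius):
    """Distance using a Gaussian kernel truncated to |offset| <= radius."""
    ky, kx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    kernel = np.exp(-(ky**2 + kx**2) / (2 * r**2)) / (2 * np.pi * r**2)
    g = convolve2d(diff, kernel, mode='same')
    return np.sqrt(np.sum(g * diff))

rng = np.random.default_rng(0)
diff = rng.random((40, 30)) - rng.random((40, 30))
r = 2.0
full = weighted_dist(diff, r, radius=40)                    # covers every offset
small = weighted_dist(diff, r, radius=int(np.ceil(8 * r)))  # cropped to ~8*r
assert abs(full - small) < 1e-9 * max(full, 1.0)
```

Even at the extreme r = 20, a radius of ~8·r gives a roughly 321×321 kernel, far smaller than the 16385×401 one built from (-s1:s1) and (-s2:s2).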

  • Thank you very much for such an outstanding solution. I will get back to you again in this thread if any doubt arises. Thanks again. – nagarwal May 03 '16 at 07:20
  • If you can check that this solution works (or perhaps nearly works), then please mark it as correct. also, it might be nice to edit your previous question, to link to this one. – dan-man May 03 '16 at 10:00
  • Can you please explain the idea behind the step `kernel = bsxfun(@plus, (-s1:s1).^2.', (-s2:s2).^2); `. Moreover, after this step the numbers are so big that MATLAB is producing 0 for the expression `exp(-kernel/ (2*r*2))` . Therefore, I don't think this serves my purpose. – nagarwal May 04 '16 at 05:55
  • @nagarwal - the first line produces a (correctly) large 2d array giving the squared distances between a point at the centre and every other point. The convolution shifts the kernel to every possible position, and multiplies and sums for that position, which is what you need. Regarding "Matlab is producing [almost] 0", I think you will find that this is exactly what you want, and I make a note of this at the end of the answer. – dan-man May 04 '16 at 08:12
  • I made a thorough analysis of your code and realized that your idea is really good. The kernel actually stores the distance of each pixel to all the other pixels, in the form of multiple 8192 x 200 arrays, and then the convolution does the rest of the job of multiplication & addition. I only feel that in the first line, the kernel should be defined as `kernel = bsxfun(@plus, (-s1+1:s1-1).^2.', (-s2+1:s2-1).^2); ` because I don't think the outermost rows and columns correspond to any distance. Though I still believe that since we have used the flag `same` during `conv2`, our results are going to be unaffected. – nagarwal May 04 '16 at 15:06
  • `@dan-man` Hello, hope you are doing good. Recently, I wished to switch over from using Gaussian to Hyperbolic Tangent (Sigmoid) Kernel for which the expression is defined [here](http://crsouza.com/2010/03/kernel-functions-for-machine-learning-applications/) . According to this expression, I want to redefine my matrix G in the above problem. Can you please throw some light on accomplishing that. Thanks in advance. – nagarwal Aug 03 '16 at 11:23