
I have an image (10000x10000 pixels) and I have a kernel (5x5 pixels). I want to find the place(s) in the image that best matches the kernel.

I vaguely remember from my studies that I need to compute a correlation coefficient for each position in the large image with respect to the kernel. But with roughly (10000 - 4) * (10000 - 4) positions to evaluate, I expect a huge performance hit doing this in pure Python.

Having only brief knowledge of the subject, I was hoping to find something in either numpy or scipy that would do this relatively fast, but I haven't been able to find anything.

Does either numpy or scipy contain a method for doing this?

Michele d'Amico
Chau
  • Possible duplicate of this: http://stackoverflow.com/questions/1100100/fft-based-2d-convolution-and-correlation-in-python – FuzzyDuck Jan 26 '15 at 13:24

1 Answer


This is usually referred to as template matching in image processing and most image processing packages will have something for it. If you can use scikit-image then you probably want match_template. Of course, OpenCV can do template matching too.
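A minimal sketch of what this looks like with scikit-image's `match_template` (the image size, coordinates, and random data below are just for the demo, not from the question):

```python
import numpy as np
from skimage.feature import match_template

# Build a toy image with a known 5x5 pattern embedded at (30, 40).
rng = np.random.default_rng(0)
image = rng.random((100, 100))
kernel = rng.random((5, 5))
image[30:35, 40:45] = kernel

# match_template returns a map of normalized correlation coefficients.
# With the default pad_input=False, index (i, j) in the result is the
# top-left corner of the candidate match in the image.
result = match_template(image, kernel)
ij = np.unravel_index(np.argmax(result), result.shape)
print(ij)  # (30, 40)
```

To find multiple good matches rather than just the best one, you can threshold `result` (e.g. `np.argwhere(result > 0.9)`) instead of taking a single argmax.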

If you need to stick with pure scipy, it's easy enough to implement yourself: just find the maximum pixel (argmax) of a normalized cross-correlation (correlate2d).
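A rough sketch of that pure-scipy route, assuming the standard definition of the normalized cross-correlation (subtract the local mean, divide by the local standard deviation); the helper name `ncc` and the test data are made up for illustration:

```python
import numpy as np
from scipy.signal import correlate2d

def ncc(image, kernel):
    """Map of normalized cross-correlation coefficients (mode='valid')."""
    kernel = kernel - kernel.mean()
    n = kernel.size
    ones = np.ones(kernel.shape)
    # Numerator: correlation with the zero-mean kernel. The local image
    # mean cancels out here because the kernel now sums to zero.
    num = correlate2d(image, kernel, mode='valid')
    # Per-window variance of the image via running sum and sum of squares.
    win_sum = correlate2d(image, ones, mode='valid')
    win_sq = correlate2d(image ** 2, ones, mode='valid')
    image_var = win_sq - win_sum ** 2 / n
    denom = np.sqrt(np.clip(image_var, 0, None) * (kernel ** 2).sum())
    return num / np.where(denom == 0, 1, denom)

# Demo: embed the kernel in a random image and recover its position.
rng = np.random.default_rng(1)
image = rng.random((100, 100))
kernel = rng.random((5, 5))
image[10:15, 20:25] = kernel

result = ncc(image, kernel)
peak = np.unravel_index(np.argmax(result), result.shape)
print(peak)  # (10, 20); an exact match scores 1.0
```

Note that `correlate2d` is a direct (non-FFT) implementation; with a small 5x5 kernel that is fine even on a 10000x10000 image, but for larger kernels an FFT-based approach (`scipy.signal.fftconvolve` on a flipped kernel) is much faster.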

tom10