
It is known that when using first- and especially second-order derivatives, we should first smooth the image. So in the case of the Laplacian of Gaussian, we first convolve with the Gaussian mask and then with the Laplacian mask. But on the other hand, both of them are linear operations, so should we get the same result if we first apply the Laplacian and then the Gaussian?

Cris Luengo
gbox

1 Answer


Yes, the two operations are convolutions, linear operations, and therefore can be applied in any order to yield the exact same result. If the results are not exactly the same, it is due to rounding errors.
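As a quick sanity check, here is a minimal sketch using a naive zero-padded convolution (the kernel sizes and sigma are arbitrary choices for the demo). Away from the image border, the two orders agree to floating-point rounding; note that with zero padding the border pixels can differ between the two orders, which is a boundary-handling effect, not a failure of linearity:

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive 'same'-size 2D convolution with zero padding."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.empty_like(img)
    k = kernel[::-1, ::-1]  # flip the kernel for true convolution
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
img = rng.random((32, 32))

# 5x5 Gaussian kernel (sigma = 1), built from a normalized 1D Gaussian
x = np.arange(-2, 3)
g = np.exp(-x**2 / 2.0)
g /= g.sum()
gauss = np.outer(g, g)

# 3x3 Laplacian approximation
lap = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

a = convolve2d(convolve2d(img, gauss), lap)  # Gaussian first, then Laplacian
b = convolve2d(convolve2d(img, lap), gauss)  # Laplacian first, then Gaussian

# Compare interior pixels (combined kernel radius is 3); the difference
# there is rounding error only.
diff = np.max(np.abs(a[3:-3, 3:-3] - b[3:-3, 3:-3]))
print(diff)  # tiny, on the order of machine epsilon
```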

You can also combine both kernels and apply them as a single convolution. But it actually is computationally cheaper to compute the Gaussian and the 3x3 Laplacian separately, because the Gaussian can be computed by applying two 1D filters (i.e. it is separable), which saves a lot of computation.
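To illustrate the separability argument (a minimal sketch; sigma, kernel size, and image size are arbitrary choices): filtering the rows and then the columns with a 1D Gaussian gives the same result as a single pass with the full 2D outer-product kernel, at a fraction of the per-pixel cost:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((16, 16))

# 7-tap 1D Gaussian (sigma = 1), normalized
x = np.arange(-3, 4)
g = np.exp(-x**2 / 2.0)
g /= g.sum()

# Separable: two 1D passes (rows, then columns), zero-padded 'same' mode
rows = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, img)
sep = np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, rows)

# Brute force: one pass with the 2D outer-product kernel
kernel = np.outer(g, g)
kh, kw = kernel.shape
padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
full = np.empty_like(img)
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        full[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)

print(np.max(np.abs(sep - full)))  # rounding error only

k = g.size
print(f"ops/pixel: separable Gaussian + 3x3 Laplacian = {2 * k + 9}, "
      f"brute-force 2D kernel = {k * k}")
```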

For details about the different ways to compute the Laplace of Gaussian, see this answer.

Cris Luengo
  • it actually is computationally cheaper to compute the Gaussian and the 3x3 Laplacian separately ... Are you sure? This would result in two fast passes + one slow pass. I was under the impression that since I already had to do the slow pass for the Laplacian that combining them into an LoG and still doing the slow pass I had to do anyway, would be fastest. – hippietrail Sep 15 '19 at 07:41
  • @hippietrail: You can compute the LoG as the sum of two filters, second-order derivatives of the Gaussian. Each of these can be computed by two 1D convolutions. So you do four 1D convolutions. Say the kernel is 15x15. That is 4x15=60 multiplications and additions (plus 1 for the sum of the two results). Separating them as a Gaussian + a 3x3 Laplacian approximation, you do 2x15+9=39 multiplications and additions. Doing it brute-force as a single convolution, you do 15x15=225 multiplications and additions. – Cris Luengo Sep 15 '19 at 13:28
  • Yes I've been reading [this page of yours](https://www.crisluengo.net/archives/1099). It's very interesting. I'm not sure how I would go about doing it though, or if the case of a 5x5 would benefit. – hippietrail Sep 15 '19 at 13:54
  • @hippietrail: For a small image that fits in cache, the number of passes is not so important; reduce the number of operations instead. For larger images this is no longer true, and it is worthwhile to do some more computing to reduce the number of memory accesses. The best thing to do is implement different methods and time them. – Cris Luengo Sep 15 '19 at 14:02
  • That said, 5x5 is very small for a LoG filter. If sigma is smaller than about 0.8 you will be undersampling the kernel (aliasing!). However, even at 0.8 sigma, sampling the second derivative of the Gaussian with only 5 samples will cause clipping that reduces the precision of the computed gradient. Whether this clipping is OK depends on what your goal is for the filter. – Cris Luengo Sep 15 '19 at 14:02
  • My use-case was just to take variance of the LoG for a set of photos taken together to help choose the one with least blur. And to learn new stuff as I go (-: Also interested what you think of [my question on the DSP site.](https://dsp.stackexchange.com/questions/60611/why-do-convolution-kernels-such-as-gaussian-laplacian-log-almost-always-seem-t) – hippietrail Sep 15 '19 at 14:15
  • @hippietrail: That is not a high-precision application, you can afford to clip a bit, but don’t clip too much. 1+2*ceil(2.5*sigma) or so samples should be good. – Cris Luengo Sep 15 '19 at 14:50
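The rule of thumb quoted in the last comment translates to a one-liner (a sketch; the 2.5-sigma cutoff is the heuristic from the comment, and the example sigmas are arbitrary):

```python
import math

def log_kernel_size(sigma):
    """Odd kernel size from the 1 + 2*ceil(2.5*sigma) rule of thumb."""
    return 1 + 2 * math.ceil(2.5 * sigma)

for sigma in (1.0, 2.0, 3.0):
    print(sigma, "->", log_kernel_size(sigma))  # 7, 11, and 17 taps
```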