59

I want to blur my image using the native Gaussian blur formula. I read the Wikipedia article, but I am not sure how to implement this.

How do I use the formula to decide weights?

I do not want to use any built-in functions like the ones MATLAB has.

hippietrail
Moeb
  • 2
    Basically you need to implement a convolution operator equivalent to the **conv2()** function in MATLAB. However, since a 2D Gaussian can be separated into two 1D Gaussians, all you need is an implementation of 1D convolution, coupled with the correct kernel. – Amro Nov 08 '09 at 12:13

5 Answers

137

Writing a naive Gaussian blur is actually pretty easy. It is done in exactly the same way as any other convolution filter. The only difference between a box filter and a Gaussian filter is the matrix you use.

Imagine you have an image defined as follows:

 0  1  2  3  4  5  6  7  8  9
10 11 12 13 14 15 16 17 18 19
20 21 22 23 24 25 26 27 28 29
30 31 32 33 34 35 36 37 38 39
40 41 42 43 44 45 46 47 48 49
50 51 52 53 54 55 56 57 58 59
60 61 62 63 64 65 66 67 68 69
70 71 72 73 74 75 76 77 78 79
80 81 82 83 84 85 86 87 88 89
90 91 92 93 94 95 96 97 98 99

A 3x3 box filter matrix is defined as follows:

0.111 0.111 0.111
0.111 0.111 0.111
0.111 0.111 0.111

To apply the blur (the procedure is identical for the box and the Gaussian filter) you would do the following:

For pixel 11 you would need to load pixels 0, 1, 2, 10, 11, 12, 20, 21, 22.

You would then multiply pixel 0 by the upper-left entry of the 3x3 blur filter, pixel 1 by the top middle, pixel 2 by the top right, pixel 10 by the middle left, and so on.

Then add them all together and write the result to pixel 11. As you can see, pixel 11 is now the average of itself and the surrounding pixels.

Edge cases do get a bit more complex. What values do you use for samples that fall outside the edge of the image? One way is to wrap round to the other side; this looks good for an image that will later be tiled. Another way is to clamp, i.e. repeat the edge pixel into the missing positions.

So for the upper-left corner (pixel 0) you might place the samples as follows:

 0  0  1
 0  0  1
10 10 11

I hope you can see how this can easily be extended to larger filter kernels (i.e. 5x5, 9x9, etc.).
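As a rough sketch of that per-pixel loop in C# (my addition, not part of the original answer; it assumes a greyscale image stored as a double[,] array and clamps coordinates at the borders as described above):

double[,] Convolve(double[,] image, double[,] kernel)
{
    int height = image.GetLength(0);
    int width = image.GetLength(1);
    int half = kernel.GetLength(0) / 2;    // 1 for a 3x3 kernel, 2 for 5x5, ...
    double[,] result = new double[height, width];

    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            double sum = 0.0;
            for (int ky = -half; ky <= half; ky++)
            {
                for (int kx = -half; kx <= half; kx++)
                {
                    // Clamp sample positions so edge pixels reuse the border values.
                    int sy = Math.Min(Math.Max(y + ky, 0), height - 1);
                    int sx = Math.Min(Math.Max(x + kx, 0), width - 1);
                    sum += image[sy, sx] * kernel[ky + half, kx + half];
                }
            }
            result[y, x] = sum;
        }
    }
    return result;
}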

The difference between a Gaussian filter and a box filter is the numbers that go in the matrix. A Gaussian filter uses a Gaussian distribution across a row and column.

e.g. for a 1D filter defined arbitrarily as (i.e. this isn't a true Gaussian, but probably not far off)

0.1 0.8 0.1

The first column would be the same values, but multiplied by the first item of the row above (0.1), giving the partially built kernel:

0.01 0.8 0.1
0.08 
0.01 

The second column would be the same, but with the values multiplied by the 0.8 in the row above (and so on), which completes the kernel:

0.01 0.08 0.01
0.08 0.64 0.08
0.01 0.08 0.01

The result of adding all of the above together should equal 1. The difference between the above filter and the original box filter would be that the end pixel written would have a much heavier weighting towards the central pixel (i.e. the one that is in that position already). The blur occurs because the surrounding pixels do blur into that pixel, though not as much. Using this sort of filter you get a blur, but one that doesn't destroy as much of the high-frequency (i.e. rapid changing of colour from pixel to pixel) information.

These sorts of filters can do lots of interesting things. You can do edge detection with this sort of filter by subtracting the surrounding pixels from the current pixel. This will leave only the really big changes in colour (the high frequencies) behind.
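For illustration (this kernel is my addition, not taken from the answer above), a common 3x3 edge-detect kernel of that kind weights the centre positively and its neighbours negatively, so that flat regions sum to zero:

-1 -1 -1
-1  8 -1
-1 -1 -1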

Edit: A 5x5 filter kernel is defined exactly as above.

e.g. if your row is 0.1 0.2 0.4 0.2 0.1, then multiply each value in it by the first item to form the first column, then by the second item to form the second column, and so on; you'll end up with a filter of

0.01 0.02 0.04 0.02 0.01
0.02 0.04 0.08 0.04 0.02
0.04 0.08 0.16 0.08 0.04
0.02 0.04 0.08 0.04 0.02
0.01 0.02 0.04 0.02 0.01

Taking some arbitrary positions, you can see that position 0, 0 is simply 0.1 * 0.1, position 0, 2 is 0.1 * 0.4, position 2, 2 is 0.4 * 0.4, and position 1, 2 is 0.2 * 0.4.
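As a small sketch of that outer-product construction (my addition, written in C# with illustrative names; it assumes the 1D row already sums to 1):

double[,] OuterProduct(double[] row)
{
    // kernel[y, x] = row[y] * row[x], e.g. position (1, 2) = 0.2 * 0.4 = 0.08
    double[,] kernel = new double[row.Length, row.Length];
    for (int y = 0; y < row.Length; y++)
    {
        for (int x = 0; x < row.Length; x++)
        {
            kernel[y, x] = row[y] * row[x];
        }
    }
    return kernel;
}

// Builds the 5x5 kernel shown above:
double[,] kernel = OuterProduct(new[] { 0.1, 0.2, 0.4, 0.2, 0.1 });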

I hope that gives you a good enough explanation.

Goz
  • @Goz Suppose I want to use a 5x5 filter kernel, how do I calculate the weights that should go in the filter? – Moeb Nov 08 '09 at 12:18
  • Could have mentioned the keyword: **neural networks** respectively **artificial neural networks**. – Bitterblue Mar 11 '15 at 12:45
  • @bitterblue: Can you explain why neural networks have anything to do with image space filtering? – Goz Mar 11 '15 at 14:29
  • 'cause they're connected? Same technique just without the threshold thing. – Bitterblue Mar 11 '15 at 15:16
  • 1
    @Bitterblue: Really. How? Neural networks are a machine learning algorithm. This is about 2D image space filtering. – Goz Mar 11 '15 at 16:59
  • You described the method yourself in the A. You used 9 nodes, gave them weights, performed the same mathematical calculations with the input values (pixels) and got an output value. If you're going for detection (e.g. line detection) you can teach your ANNs too. – Bitterblue Mar 12 '15 at 12:59
  • 1
@Bitterblue: Fair point, you can use filters to extract features for machine learning. It's still wholly irrelevant to this question, however ... – Goz Mar 23 '15 at 09:59
  • "Using this sort of filter you get a blur but one that doesn't destroy as much of the high frequency (ie rapid changing of colour from pixel to pixel) information." This is patently untrue. The Gaussian filter is much better at killing high frequencies than the "box" filter (mean with uniform weights). It provides the best compromise between compactness in the spatial/time domain and compactness in the frequency domain. Compactness in the frequency domain implies it is a very good low-pass filter. – Cris Luengo Jun 19 '18 at 19:12
11

Here's the pseudo-code for the code I used in C# to calculate the kernel. I do not dare say that I treat the end-conditions correctly, though:

// 'radius' must be defined beforehand; here it also acts as the Gaussian sigma.
double[] kernel = new double[radius * 2 + 1];
double twoRadiusSquaredRecip = 1.0 / (2.0 * radius * radius);
double sqrtTwoPiTimesRadiusRecip = 1.0 / (Math.Sqrt(2.0 * Math.PI) * radius);
double radiusModifier = 1.0;

// Evaluate the 1D Gaussian at each offset from -radius to +radius.
int r = -radius;
for (int i = 0; i < kernel.Length; i++)
{
    double x = r * radiusModifier;
    x *= x;
    kernel[i] = sqrtTwoPiTimesRadiusRecip * Math.Exp(-x * twoRadiusSquaredRecip);
    r++;
}

// Normalise so the kernel sums to 1.
double div = 0.0;
for (int i = 0; i < kernel.Length; i++)
{
    div += kernel[i];
}
for (int i = 0; i < kernel.Length; i++)
{
    kernel[i] /= div;
}

Hope this helps.

Simson
Cecil Has a Name
  • 3
    I believe, that this line: `sqrtTwoPiTimesRadiusRecip * Exp(-x * sqrtTwoPiTimesRadiusRecip);` must be: `sqrtTwoPiTimesRadiusRecip * Exp(-x * twoRadiusSquaredRecip);` – ashagi Jun 30 '11 at 06:07
  • 4
    The multiplication by `sqrtTwoPiTimesRadiusRecip ` is not needed at all because you normalize the kernel anyway. – Emily L. Oct 21 '14 at 17:52
  • @Simson, I'm not sure what your edit did, besides take credit away from me for formally suggesting it after running into the same issue with my coworker's code – Charlie G Apr 10 '20 at 01:38
  • @Charlie my edit gave you an instant +2 in reputation, as when I click "Improve Edit" your edit did not have to wait for a second reviewer to be confirmed. When there is something more to be done on a post in the review queue this is the way to fix it. Check the edit history, no credit is stolen from you. – Simson Apr 10 '20 at 03:17
  • 1
    @Simson I get it, but I was having a power trip about having my name/picture be on the question page, not just in the history :( hah oh well thanks for explaining – Charlie G Apr 10 '20 at 03:59
9

To use the filter kernel discussed in the Wikipedia article you need to implement (discrete) convolution. The idea is that you have a small matrix of values (the kernel), you move this kernel from pixel to pixel in the image (i.e. so that the center of the matrix is on the pixel), multiply the matrix elements with the overlapped image elements, sum all the values in the result and replace the old pixel value with this sum.

Gaussian blur can be separated into two 1D convolutions (one vertical and one horizontal) instead of a 2D convolution, which also speeds things up a bit.
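A rough sketch of that separable approach (my own illustration, not the answer's code; it assumes a greyscale double[,] image, a normalised 1D kernel, and clamping at the image borders):

double[,] SeparableBlur(double[,] image, double[] kernel1d)
{
    // Blur along rows first, then along columns of the intermediate result.
    double[,] horizontalPass = Convolve1D(image, kernel1d, horizontal: true);
    return Convolve1D(horizontalPass, kernel1d, horizontal: false);
}

double[,] Convolve1D(double[,] image, double[] kernel, bool horizontal)
{
    int height = image.GetLength(0);
    int width = image.GetLength(1);
    int half = kernel.Length / 2;
    double[,] result = new double[height, width];

    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            double sum = 0.0;
            for (int k = -half; k <= half; k++)
            {
                // Step along a row or a column, clamping at the image border.
                int sy = horizontal ? y : Math.Min(Math.Max(y + k, 0), height - 1);
                int sx = horizontal ? Math.Min(Math.Max(x + k, 0), width - 1) : x;
                sum += image[sy, sx] * kernel[k + half];
            }
            result[y, x] = sum;
        }
    }
    return result;
}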

Markus Johnsson
3

I am not clear whether you want to restrict this to certain technologies, but if not, SVG (Scalable Vector Graphics) has an implementation of Gaussian blur. I believe it applies to all primitives, including pixels. SVG has the advantage of being an open standard that is widely implemented.

peter.murray.rust
I want to use the basic definition of a Gaussian filter, and no built-in implementations. I want to implement it on my own. – Moeb Nov 08 '09 at 11:52
1

Well, the Gaussian kernel is a separable kernel.
Hence all you need is a function which supports separable 2D convolution, like ImageConvolutionSeparableKernel().

Once you have it, all that is needed is a wrapper to generate the 1D Gaussian kernel and send it to that function, as done in ImageConvolutionGaussianKernel().

The code is a straightforward C implementation of 2D image convolution accelerated by SIMD (SSE) and multithreading (OpenMP).

The whole project is available at Image Convolution - GitHub.

Royi