
So I'm starting a project about image processing in C++.

The thing is that everything I find online about this (blurring an image in C++) uses either CUDA or OpenCV.

Is there a way to blur an image with C++ only? (for starters)

If yes, can somebody please share the code or explain?

Thanks!

    Yes. Mean filter. But first you need a structure to hold your image. Then load it. Then save it. In between you apply the filter. – knivil Feb 21 '17 at 09:44
  • Possible duplicate of [How do I gaussian blur an image without using any in-built gaussian functions?](http://stackoverflow.com/questions/1696113/how-do-i-gaussian-blur-an-image-without-using-any-in-built-gaussian-functions) – davidsheldon Feb 21 '17 at 09:45
  • I'd recommend **CImg**, it is C++, light weight, simple to install (just one header file and no libraries) and easy to use... http://cimg.eu – Mark Setchell Feb 21 '17 at 09:47
  • There's no `blur_image()` function in C++, but all the tools are there. The image is just data: you need to read it in, possibly decode it (if it's a JPEG, say), fiddle with it, possibly re-encode it, and save it again. – Colin Feb 21 '17 at 09:51
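As the comments say, the first step is simply getting pixel data into memory. As a minimal sketch, here is one way to load a binary PGM (P5) greyscale file, which has a deliberately simple format; this is an illustrative loader that skips details like `#` comment lines in the header:

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Minimal binary PGM (P5) loader: one byte per pixel, greyscale.
// Simplification: assumes no '#' comment lines in the header.
struct Image {
    int width = 0, height = 0;
    std::vector<std::uint8_t> pixels; // row-major, width*height bytes
};

bool loadPgm(const std::string& path, Image& img) {
    std::ifstream in(path, std::ios::binary);
    std::string magic;
    int maxval = 0;
    in >> magic >> img.width >> img.height >> maxval;
    if (!in || magic != "P5" || maxval > 255) return false;
    in.get(); // consume the single whitespace byte after the header
    img.pixels.resize(static_cast<std::size_t>(img.width) * img.height);
    in.read(reinterpret_cast<char*>(img.pixels.data()),
            static_cast<std::streamsize>(img.pixels.size()));
    return static_cast<bool>(in);
}
```

Writing the result back out as PGM is the same header followed by the raw bytes, so a load/blur/save round trip needs no external library.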

2 Answers


Firstly you need the image in memory.

Then you need a second buffer to use as a workspace.
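The second buffer exists because the filter must read the *original* values of a pixel's neighbours; writing results back into the source image would corrupt later reads. A minimal setup, with illustrative dimensions standing in for whatever the file header says:

```cpp
#include <cstdint>
#include <vector>

// Illustrative dimensions; in a real program they come from the image file.
constexpr int width = 640;
constexpr int height = 480;

// The filter reads original neighbour values, so it cannot run in place:
// results go into a separate output buffer of the same size.
std::vector<std::uint8_t> image(static_cast<std::size_t>(width) * height);
std::vector<std::uint8_t> output(image.size());
```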

Then you need a filter. A common filter would be

          1   4  1
          4 -20  4
          1   4  1

For each pixel, we apply the filter: we start from the centre pixel's value and add in a weighted sum of the 3x3 neighbourhood. Because the filter weights sum to zero, this keeps the overall image from going lighter or darker.

Applying a small filter is very simple.

          for (y = 0; y < height - 2; y++)
            for (x = 0; x < width - 2; x++)
            {
               /* start from the centre pixel, then add the weighted neighbours */
               total = image[(y+1)*width + x+1];
               for (fy = 0; fy < 3; fy++)
                 for (fx = 0; fx < 3; fx++)
                   total += image[(y+fy)*width + x+fx] * filter[fy*3 + fx];
               output[(y+1)*width + x+1] = clamp(total, 0, 255);
            }

You need to special case the edges, which is just fiddly but doesn't add any theoretical complexity.
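One common way to special-case the edges is to clamp sampled coordinates to the image, i.e. replicate the border pixels. As an illustration (not the answer's filter above), here is that idea with a plain 3x3 box blur, all weights equal and divided by 9:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// 3x3 box blur with replicated borders: out-of-range neighbour
// coordinates are clamped back onto the image.
void boxBlurClamped(const std::vector<std::uint8_t>& image,
                    std::vector<std::uint8_t>& output,
                    int width, int height) {
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++) {
            int total = 0;
            for (int fy = -1; fy <= 1; fy++)
                for (int fx = -1; fx <= 1; fx++) {
                    int sy = std::clamp(y + fy, 0, height - 1);
                    int sx = std::clamp(x + fx, 0, width - 1);
                    total += image[sy * width + sx];
                }
            output[y * width + x] = static_cast<std::uint8_t>(total / 9);
        }
}
```

Other conventions (wrapping around, or treating outside pixels as black) are equally valid; they just change what the border looks like after blurring.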

When we use faster algorithms than the naive one, it becomes important to set the edges up correctly. You can also do the calculation in the frequency domain, which is a lot faster with a big filter.
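Besides frequency-domain convolution, a classic faster-than-naive trick for box blurs is a sliding window: maintain a running sum so each output costs O(1) regardless of the filter radius. A one-dimensional sketch (run it over rows, then columns, since a box blur is separable), using border replication:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sliding-window box blur along one row: each output is the mean of the
// (2*radius + 1) pixels around it, updated in O(1) per pixel.
std::vector<int> rowBoxBlur(const std::vector<int>& row, int radius) {
    int n = static_cast<int>(row.size());
    std::vector<int> out(n);
    long long sum = 0;
    // prime the window, clamping indices to replicate the left edge
    for (int i = -radius; i <= radius; i++)
        sum += row[std::clamp(i, 0, n - 1)];
    int count = 2 * radius + 1;
    for (int i = 0; i < n; i++) {
        out[i] = static_cast<int>(sum / count);
        sum -= row[std::clamp(i - radius, 0, n - 1)];
        sum += row[std::clamp(i + radius + 1, 0, n - 1)];
    }
    return out;
}
```

Repeated box blurs also approximate a Gaussian blur, which is why this trick shows up so often in practice.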

Malcolm McLean

If you would like to implement the blurring on your own, you have to somehow store the image in memory. If you have a black and white image, an

unsigned char[width*height]

might be sufficient to store the image; if it is a colour image, you will need the same kind of array at three or four times the size (one plane per colour channel, plus optionally one for the so-called alpha value, which describes opacity).
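One possible in-memory layout for the colour case, sketched here with interleaved RGBA (four bytes per pixel) rather than separate planes; either choice works, this one keeps each pixel's channels together:

```cpp
#include <cstdint>
#include <vector>

// One pixel: interleaved red, green, blue and alpha (opacity) bytes.
struct Rgba { std::uint8_t r, g, b, a; };

// Row-major colour image: pixels[y * width + x] is the pixel at (x, y).
struct ColourImage {
    int width = 0, height = 0;
    std::vector<Rgba> pixels;

    Rgba& at(int x, int y) { return pixels[y * width + x]; }
};
```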

For the black and white case, you would sum up each pixel together with its neighbours and take the average; this approach transfers to colour images by applying the operation to each colour channel separately.
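The averaging just described can be sketched as a 3x3 mean filter over an interleaved buffer, so the same function covers greyscale and colour by varying the channel count; for brevity this version leaves the one-pixel border untouched rather than special-casing it:

```cpp
#include <cstdint>
#include <vector>

// 3x3 mean filter over an interleaved image (channels = 1 for greyscale,
// 3 for RGB, 4 for RGBA). The border is copied through unfiltered.
void meanFilter(const std::vector<std::uint8_t>& in,
                std::vector<std::uint8_t>& out,
                int width, int height, int channels) {
    out = in; // border pixels keep their original values
    for (int y = 1; y < height - 1; y++)
        for (int x = 1; x < width - 1; x++)
            for (int c = 0; c < channels; c++) {
                int total = 0;
                for (int fy = -1; fy <= 1; fy++)
                    for (int fx = -1; fx <= 1; fx++)
                        total += in[((y + fy) * width + (x + fx)) * channels + c];
                out[(y * width + x) * channels + c] =
                    static_cast<std::uint8_t>(total / 9);
            }
}
```

Applying the filter channel by channel like this is exactly the "transfer to colour images" step: each channel is blurred as if it were its own greyscale image.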

The operation described above is a special case of the so-called kernel (convolution) filter; with different weights, the same machinery implements sharpening, edge detection and other operations.

Codor