
Can anyone guide me in the best way of developing a filter algorithm for video processing?

Say, for example, I wanted to apply a fisheye lens filter to an image: how would I process the pixels so that they mimic this effect?

If I wanted to make the picture look more red, then I would deduct values from the blue and green components at each pixel, leaving behind only the red component.
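
For example, I could do that with a per-pixel pass over the frame's byte buffer, something like this minimal C++ sketch (I'm assuming an interleaved 8-bit RGBA buffer and an arbitrary attenuation factor here, just for illustration):

    #include <cstddef>
    #include <cstdint>

    // Minimal sketch: push every pixel of an interleaved 8-bit RGBA frame
    // toward red by attenuating its green and blue channels.
    // The RGBA layout and the 0.5 attenuation factor are just assumptions.
    void tintRed(std::uint8_t* pixels, std::size_t width, std::size_t height)
    {
        for (std::size_t i = 0; i < width * height; ++i)
        {
            std::uint8_t* p = pixels + i * 4;
            // p[0] (red) and p[3] (alpha) are left untouched
            p[1] = static_cast<std::uint8_t>(p[1] * 0.5f); // green
            p[2] = static_cast<std::uint8_t>(p[2] * 0.5f); // blue
        }
    }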

This kind of distortion is more than just color processing, so I'd like to know how to manipulate the pixels in the correct way to mimic a fisheye lens filter, or say a pinch filter, and so forth.

EDIT:

Filter algorithm for VIDEO PROCESSING*

Pavan

2 Answers


As Martin states, to apply a distortion to an image, rather than just a color correction, you need to somehow displace pixels within that image. You generally start with the output image and figure out which input pixel location to grab from to fill in each location in the output.

For example, to generate the pinch distortion I show in this answer, I use an OpenGL ES fragment shader that looks like the following:

 // Texture coordinate for the current fragment, interpolated from the vertex shader
 varying highp vec2 textureCoordinate;

 // Source image to be distorted
 uniform sampler2D inputImageTexture;

 // Center of the pinch region, its radius, and the strength of the distortion
 uniform highp vec2 center;
 uniform highp float radius;
 uniform highp float scale;

 void main()
 {
     highp vec2 textureCoordinateToUse = textureCoordinate;
     highp float dist = distance(center, textureCoordinate);
     textureCoordinateToUse -= center;
     if (dist < radius)
     {
         // Scale the fragment's offset from the center based on its distance from it
         highp float percent = 1.0 + ((0.5 - dist) / 0.5) * scale;

         textureCoordinateToUse = textureCoordinateToUse * percent;
     }
     textureCoordinateToUse += center;

     // Read the output color from the displaced coordinate in the input image
     gl_FragColor = texture2D(inputImageTexture, textureCoordinateToUse);
 }

This GLSL code is applied to every pixel in the output image. It calculates the distance from the center of the pinched region to the current pixel coordinate, then scales that pixel's offset from the center based on the distance and the scale parameter. The resulting displaced coordinate is where the output color will be read from in the input image.

Sampling a color from the input image at a displaced coordinate for each output pixel is what produces a distorted version of the input image. As you can see in my linked answer, slightly different functions for calculating this displacement can lead to very different distortions.
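
If you wanted to do the same thing on the CPU over a raw pixel buffer, the inverse mapping looks roughly like this. This is only an illustrative sketch, not code from my framework; it assumes an interleaved 8-bit RGBA buffer, normalized coordinates, and nearest-neighbor sampling:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Illustrative CPU version of the same inverse mapping: for every OUTPUT
    // pixel, compute a displaced coordinate and copy the color found at that
    // location in the INPUT image. Coordinates are normalized to [0, 1] and
    // the parameters mirror the shader's center/radius/scale uniforms.
    std::vector<std::uint8_t> pinch(const std::vector<std::uint8_t>& src,
                                    int width, int height,
                                    float centerX, float centerY,
                                    float radius, float scale)
    {
        std::vector<std::uint8_t> dst(src.size());
        for (int y = 0; y < height; ++y)
        {
            for (int x = 0; x < width; ++x)
            {
                // Offset of this output pixel from the pinch center, normalized
                float dx = (x + 0.5f) / width  - centerX;
                float dy = (y + 0.5f) / height - centerY;
                float dist = std::sqrt(dx * dx + dy * dy);
                if (dist < radius)
                {
                    // Same displacement curve as the fragment shader above
                    float percent = 1.0f + ((0.5f - dist) / 0.5f) * scale;
                    dx *= percent;
                    dy *= percent;
                }
                // Nearest-neighbor sample from the displaced input coordinate
                int sx = std::clamp(static_cast<int>((centerX + dx) * width),  0, width  - 1);
                int sy = std::clamp(static_cast<int>((centerY + dy) * height), 0, height - 1);
                const std::uint8_t* s = &src[(sy * width + sx) * 4];
                std::uint8_t* d = &dst[(y * width + x) * 4];
                d[0] = s[0]; d[1] = s[1]; d[2] = s[2]; d[3] = s[3];
            }
        }
        return dst;
    }

A fragment shader does exactly this work, just once per fragment in parallel on the GPU, which is why it is so much faster for live video.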

Brad Larson
  • I appreciate your post. I've checked your answer in the other post, and the images look great. I want to be able to apply, say, the pinch filter to a video, so the user can shoot a video and then apply a filter on it afterwards; after the processing is done (say it takes a minute), they can save it. That's what I'm trying to do. I don't know whether grabbing each individual frame and then applying a filter to it as if it were an image is the best way forward... any guidance? – Pavan May 18 '12 at 04:01
  • @Pavan - That will depend on the platform you're talking about. Given that you've asked several questions in the past about iOS, if you're targeting those devices, you can use my open source framework: https://github.com/BradLarson/GPUImage to do this on live video. There's no need to record the video, then process it, as the filters I apply run fast enough to distort video as it is captured by the camera. You can record and then process the recorded video, as well, but I imagine it would be preferable to display and record video frames as they come in. – Brad Larson May 18 '12 at 15:40
  • Hey Brad, I'm using the openFrameworks platform for iOS devices, so I'm programming in C++. I've done some basic color effects on the input coming from the camera: it grabs the pixels of the frame, then iterates through each pixel with a for loop and applies a color effect, for example shifting the reds, or running a low-pass filter on one of the colors. I'm wondering how I can integrate your library with the current setup I have. Is there any way I can pass my pixel array in as a texture, or whatever is needed, into your class that adds the filter, and get something back – Pavan May 19 '12 at 02:20
  • cont: so I can display it on the screen? At the moment, after I've done the color effect on the pixels, I simply output them back to the screen with the available methods. But I would like to have your library in between those steps and use your existing filters. I can see that you are using shaders, so is there any way I can pass something into your library, however is convenient, just so I can use your filters with my current setup? – Pavan May 19 '12 at 02:20
  • @Pavan - I just added a raw data input class, which now lets you take in bytes, filter them, and then extract bytes at the end. See the RawDataTest example to see how this works. However, you'll lose a tremendous amount of performance by reading data from the camera to the CPU, then uploading it to OpenGL ES via my framework, only to extract it again to the CPU for display. If you use the camera input from my framework, process using its filters, and then output via its view, you can grab, filter, and display 640x480 frames in 2.5 ms on an iPhone 4. – Brad Larson May 22 '12 at 03:43
  • Hey Brad, I've started using your framework. It really is something; it's nice and quick compared to the way I was doing it via openFrameworks. It looks like shaders and OpenGL are the way to go. Is there a place where I can contact you with any questions I have during my development using your framework? – Pavan May 24 '12 at 16:51
  • cont: For example, I have a question right now: in your multiviewfilter project I am trying to stop the processing of four specific views (view1, 2, 3 and 4), so that when the coordinates of those views are off the screen I can stop the processing of those specific filters and start processing another set of four filters, without having the previous four filters running in the background off the screen. – Pavan May 24 '12 at 16:53
  • cont: They should be ended, and when filters 5, 6, 7 and 8 are off the screen and view filters 1, 2, 3 and 4 are brought back onto the screen, they can begin processing again as normal. Which methods do I use? I tried doing [view4 endProcessing]; on a finger tap, but that doesn't stop the processing of the view4 filter; it still runs. Everything else works fine; I just wanted to know which methods will let me stop and start processing any view filter I like. Any guidance? – Pavan May 24 '12 at 16:54

You apply an image warp. Basically, for each point in the transformed output image you have a mathematical formula that calculates where that point would have come from in the original image, and you then just copy the pixel at those coordinates; OpenCV has functions to do this.

Normally, of course, you are trying to remove optical effects like fisheye, but the principle is the same.

PS: It's a little confusing to think of starting with the result and working back to the source, but you do it this way because many points in the source image might all go to the same point in the result, and you want an even grid of resulting pixels.
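
For example, with OpenCV you could build two map matrices that hold the source coordinate for every destination pixel and hand them to remap(). A rough C++ sketch follows; the radial formula here is only an illustrative stand-in for a real lens model:

    #include <opencv2/opencv.hpp>
    #include <cmath>

    // Rough sketch of an inverse-mapping warp with cv::remap: mapX/mapY say,
    // for every destination pixel, which source coordinate to sample from.
    // The quadratic radial formula is just an illustrative choice.
    cv::Mat radialWarp(const cv::Mat& src, float strength)
    {
        cv::Mat mapX(src.size(), CV_32FC1);
        cv::Mat mapY(src.size(), CV_32FC1);
        const float cx = src.cols / 2.0f;
        const float cy = src.rows / 2.0f;
        const float maxR = std::sqrt(cx * cx + cy * cy);

        for (int y = 0; y < src.rows; ++y)
        {
            for (int x = 0; x < src.cols; ++x)
            {
                // Normalized offset of the destination pixel from the center
                float dx = (x - cx) / maxR;
                float dy = (y - cy) / maxR;
                float r = std::sqrt(dx * dx + dy * dy);
                // Sample further from the center as the radius grows
                float factor = 1.0f + strength * r * r;
                mapX.at<float>(y, x) = cx + dx * factor * maxR;
                mapY.at<float>(y, x) = cy + dy * factor * maxR;
            }
        }

        cv::Mat dst;
        cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR, cv::BORDER_CONSTANT);
        return dst;
    }

To get a different distortion (pinch, swirl, and so on) you only change the formula that fills the maps; the sampling and interpolation step stays the same.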

Martin Beckett