As Martin states, to apply a distortion to an image, rather than just a color correction, you need to somehow displace pixels within that image. You generally start with the output image and figure out which input pixel location to grab from to fill in each location in the output.
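As a baseline, the simplest possible fragment shader just samples the input image at the same coordinate it is writing to:

varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;

void main()
{
    gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
}

Every distortion is a variation on this: compute some other coordinate and sample there instead.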
For example, to generate the pinch distortion I show in this answer, I use an OpenGL ES fragment shader that looks like the following:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp vec2 center;
uniform highp float radius;
uniform highp float scale;
void main()
{
    highp vec2 textureCoordinateToUse = textureCoordinate;
    // Distance from the center of the pinched region to this output pixel
    highp float dist = distance(center, textureCoordinate);
    // Work with the offset from the center rather than the absolute coordinate
    textureCoordinateToUse -= center;
    if (dist < radius)
    {
        // Stretch the offset for a positive scale (compress it for a negative one),
        // more strongly the closer the pixel is to the center
        highp float percent = 1.0 + ((0.5 - dist) / 0.5) * scale;
        textureCoordinateToUse = textureCoordinateToUse * percent;
    }
    textureCoordinateToUse += center;
    // Read the output color from the displaced location in the input image
    gl_FragColor = texture2D(inputImageTexture, textureCoordinateToUse);
}
This GLSL code runs once for every pixel in the output image. It first calculates the distance from the center of the pinched region to the current pixel's coordinate. If that distance falls within the radius, it computes a scaling factor from the distance and the scale parameter and applies it to the pixel's offset from the center. That rescaled coordinate determines where in the input image the output color will be read from.
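For instance, with the center at (0.5, 0.5) and a scale of 0.5, a pixel 0.25 away from the center gets percent = 1.0 + ((0.5 - 0.25) / 0.5) * 0.5 = 1.25, so its offset from the center is stretched by 25% and the color is read from farther out, pulling the image inward toward the center. A negative scale shrinks the offset instead and bulges the image outward.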
Sampling a color from the input image at a displaced coordinate for each output pixel is what produces a distorted version of the input image. As you can see in my linked answer, slightly different functions for calculating this displacement can lead to very different distortions.
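For example, keeping the same structure but rotating each coordinate around the center instead of scaling its offset gives a swirl effect. The following is a sketch of that idea rather than the exact code from the linked answer, with an angle uniform added here to control the strength of the rotation:

varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp vec2 center;
uniform highp float radius;
uniform highp float angle;

void main()
{
    highp vec2 textureCoordinateToUse = textureCoordinate - center;
    highp float dist = length(textureCoordinateToUse);
    if (dist < radius)
    {
        // Rotate the offset around the center, with the rotation strongest
        // at the center and fading to zero at the edge of the radius
        highp float percent = (radius - dist) / radius;
        highp float theta = percent * percent * angle;
        highp float s = sin(theta);
        highp float c = cos(theta);
        textureCoordinateToUse = vec2(dot(textureCoordinateToUse, vec2(c, -s)),
                                      dot(textureCoordinateToUse, vec2(s, c)));
    }
    textureCoordinateToUse += center;
    gl_FragColor = texture2D(inputImageTexture, textureCoordinateToUse);
}

Because percent falls from 1.0 at the center to 0.0 at the edge of the radius, the rotation fades out smoothly and the swirled region blends into the undistorted part of the image.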