61

I have some color photos and the illumination is not regular in the photos: one side of the image is brighter than the other side.

I would like to solve this problem by correcting the illumination. I think local contrast will help me but I don't know how :(

Would you please help me with a piece of code or a pipeline ?

– user3762718 (edited by Jeru Luke)

6 Answers

122

Convert the RGB image to Lab color-space (any color-space with a separate luminance channel will work), apply adaptive histogram equalization to the L channel, and finally convert the resulting Lab image back to RGB.

What you want is OpenCV's CLAHE (Contrast Limited Adaptive Histogram Equalization) algorithm. However, as far as I know it is not documented; there is an example in Python. You can read about CLAHE in Graphics Gems IV, pp. 474–485.

Here is an example of CLAHE in action: [example image]

And here is the C++ code that produced the above image, based on http://answers.opencv.org/question/12024/use-of-clahe/, but extended for color.

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp> // cv::imread
#include <opencv2/imgproc.hpp>   // cv::cvtColor, cv::CLAHE
#include <opencv2/highgui.hpp>   // cv::imshow, cv::waitKey
#include <vector>                // std::vector

int main(int argc, char** argv)
{
    // READ RGB color image and convert it to Lab
    cv::Mat bgr_image = cv::imread("image.png");
    cv::Mat lab_image;
    cv::cvtColor(bgr_image, lab_image, cv::COLOR_BGR2Lab);

    // Extract the L channel
    std::vector<cv::Mat> lab_planes(3);
    cv::split(lab_image, lab_planes);  // now we have the L image in lab_planes[0]

    // apply the CLAHE algorithm to the L channel
    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE();
    clahe->setClipLimit(4);
    cv::Mat dst;
    clahe->apply(lab_planes[0], dst);

    // Merge the color planes back into an Lab image
    dst.copyTo(lab_planes[0]);
    cv::merge(lab_planes, lab_image);

    // convert back to RGB
    cv::Mat image_clahe;
    cv::cvtColor(lab_image, image_clahe, cv::COLOR_Lab2BGR);

    // display the results (you might also want to see lab_planes[0] before and after).
    cv::imshow("image original", bgr_image);
    cv::imshow("image CLAHE", image_clahe);
    cv::waitKey();
}
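
To build it, something like this should work, assuming an OpenCV 4 installation that registers itself with pkg-config (use opencv instead of opencv4 for older 2.x/3.x installs):

g++ clahe.cpp -o clahe $(pkg-config --cflags --libs opencv4)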
– Bull
    python example has moved. here's the new link: https://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_imgproc/py_histograms/py_histogram_equalization/py_histogram_equalization.html – Waylon Flinn May 03 '15 at 00:21
  • The docs have moved yet again. Here is the current (2023) link for OpenCV 4.7.0: https://docs.opencv.org/4.7.0/d5/daf/tutorial_py_histogram_equalization.html – rnorris Jan 17 '23 at 21:16
35

The answer provided by Bull is the best I have come across so far. I have found it very useful. The following code is for Python users.

Details:

(Note: the following code has been updated to incorporate the pointers made by rayryeng in the comments.)

Code:

import cv2
import numpy as np

img = cv2.imread('flower.jpg', 1)

# converting to LAB color space
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)

Apply CLAHE to the L-channel (lightness), i.e. the first channel in LAB, expressed as lab[:,:,0]. Feel free to try different values for clipLimit and tileGridSize:

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
lab[:,:,0] = clahe.apply(lab[:,:,0])

# Converting image from LAB Color model to BGR color space
enhanced_img = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

# Stacking the original image with the enhanced image
result = np.hstack((img, enhanced_img))
cv2.imshow('Result', result)
cv2.waitKey(0)
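
If you want a quick feel for clipLimit, here is a minimal sketch (the values are just examples) that runs the same pipeline with a few settings and puts the results side by side; it reuses img, cv2 and np from above:

clips = (1.0, 2.0, 4.0)
variants = []
for clip in clips:
    lab_tmp = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    clahe_tmp = cv2.createCLAHE(clipLimit=clip, tileGridSize=(8, 8))
    lab_tmp[:, :, 0] = clahe_tmp.apply(lab_tmp[:, :, 0])
    variants.append(cv2.cvtColor(lab_tmp, cv2.COLOR_LAB2BGR))
cv2.imshow('clipLimit = 1 / 2 / 4', np.hstack(variants))
cv2.waitKey(0)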

Result:

The original image (left) and enhanced image (right) have been placed beside each other.


– Jeru Luke
  • Works. There are a few typos in your code: levels l,a,b are referenced as l, aa, bb and later cl is referenced as cl2. clipLimit allows tuning of the effect, 1.0 is quite subtle, 3 and 4 are more aggressive. – jdelange Oct 01 '16 at 18:18
  • Thanks for spotting it! – Jeru Luke Oct 04 '16 at 04:08
  • `cv2.split` isn't required as OpenCV in Python uses NumPy arrays. Once you create the CLAHE object, just do `lab[...,0] = clahe.apply(lab[...,0])`. You can also remove `cv2.merge`. – rayryeng Jul 15 '19 at 05:32
8

Based on the great C++ example written by Bull, I was able to write this method for Android.

I have substituted "Core.extractChannel" for "Core.split". This avoids a known memory leak issue.

public void applyCLAHE(Mat srcArry, Mat dstArry) {
    // Applies the CLAHE algorithm to srcArry and writes the result to dstArry.

    if (srcArry.channels() >= 3) {
        // READ RGB color image and convert it to Lab
        Mat channel = new Mat();
        Imgproc.cvtColor(srcArry, dstArry, Imgproc.COLOR_BGR2Lab);

        // Extract the L channel
        Core.extractChannel(dstArry, channel, 0);

        // apply the CLAHE algorithm to the L channel
        CLAHE clahe = Imgproc.createCLAHE();
        clahe.setClipLimit(4);
        clahe.apply(channel, channel);

        // Merge the color planes back into an Lab image
        Core.insertChannel(channel, dstArry, 0);

        // convert back to RGB
        Imgproc.cvtColor(dstArry, dstArry, Imgproc.COLOR_Lab2BGR);

        // Temporary Mat not reused, so release from memory.
        channel.release();
    }

}

And call it like so:

public Mat onCameraFrame(CvCameraViewFrame inputFrame){
    Mat col = inputFrame.rgba();

    // The camera frame is RGBA, but cvtColor with COLOR_BGR2Lab expects a
    // 3-channel image, so drop the alpha channel first.
    Imgproc.cvtColor(col, col, Imgproc.COLOR_RGBA2BGR);

    applyCLAHE(col, col); // Apply the CLAHE algorithm to the input color image.

    return col;
}
– Logic1
3

You can also use adaptive histogram equalisation from scikit-image:

from skimage import exposure, io

img = io.imread('image.jpg')  # any RGB image

img_adapteq = exposure.equalize_adapthist(img, clip_limit=0.03)
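
Note that equalize_adapthist returns a float image with values in [0, 1], so if the rest of your pipeline expects 8-bit images you will want a conversion along these lines:

import numpy as np

img_uint8 = (img_adapteq * 255).astype(np.uint8)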
– Little Bobby Tables
  • The question is to use OpenCV, not scikit-image. – rayryeng Jun 07 '18 at 21:02
  • Looking at http://scikit-image.org/docs/dev/api/skimage.exposure.html#skimage.exposure.equalize_adapthist this does the same thing as the accepted answer, should you be using python and not using opencv. – Bull Jul 08 '18 at 10:50
2

Image Illumination Correction with a Perceived Brightness Channel

The value channel of HSV is just the maximum of the B, G, R values, so the perceived brightness can instead be obtained with the following formula:

brightness = sqrt(0.241·R² + 0.691·G² + 0.068·B²)

I have applied CLAHE to this channel and it looks good.

  1. I calculate the perceived brightness channel of the image.

  2. (a) I convert the image to HSV colour space and replace the V channel with the CLAHE-applied perceived brightness channel.

  3. (b) I convert the image to LAB colour space and replace the L channel with the CLAHE-applied perceived brightness channel.

  4. Then I convert the image back to BGR format.

The Python code for my steps:

import cv2
import numpy as np

original = cv2.imread("/content/rqq0M.jpg")

def get_perceive_brightness(img):
    float_img = np.float64(img)  # uint8 would overflow
    b, g, r = cv2.split(float_img)
    float_brightness = np.sqrt(
        (0.241 * (r ** 2)) + (0.691 * (g ** 2)) + (0.068 * (b ** 2)))
    brightness_channel = np.uint8(np.absolute(float_brightness))
    return brightness_channel

perceived_brightness_channel = get_perceive_brightness(original)

clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8,8))
clahe_applied_perceived_channel = clahe.apply(perceived_brightness_channel) 

def hsv_equalizer(img, new_channel):
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    merged_hsv = cv2.merge((h, s, new_channel))
    bgr_img = cv2.cvtColor(merged_hsv, cv2.COLOR_HSV2BGR)
    return bgr_img

def lab_equalizer(img, new_channel):
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    merged_lab = cv2.merge((new_channel, a, b))
    bgr_img = cv2.cvtColor(merged_lab, cv2.COLOR_LAB2BGR)
    return bgr_img

hsv_equalized_img = hsv_equalizer(original,clahe_applied_perceived_channel)
lab_equalized_img = lab_equalizer(original,clahe_applied_perceived_channel)

Output of hsv_equalized_img: [image]

Output of lab_equalized_img: [image]

– Sivaram Rasathurai
-1

You can try the following code, which stretches each BGR channel by saturating the darkest s1% and brightest s2% of pixels and remapping the rest to the full [0, 255] range, then sharpens the result with an unsharp mask:

#include "opencv2/opencv.hpp"
#include <iostream>

using namespace std;
using namespace cv;

int main(int argc, char** argv)
{

    if(argc!=3)
    {
        cout<<"Usage: ./executable input_image output_image \n";
        return 0;
    }


    int filterFactor = 1;
    Mat my_img = imread(argv[1]);
    Mat orig_img = my_img.clone();
    imshow("original",my_img);

    Mat simg;

    cvtColor(my_img, simg, COLOR_BGR2GRAY);

    long int N = simg.rows*simg.cols;

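    // Per-channel intensity histograms of the color image.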
    int histo_b[256];
    int histo_g[256];
    int histo_r[256];

    for(int i=0; i<256; i++){
        histo_b[i] = 0;
        histo_g[i] = 0;
        histo_r[i] = 0;
    }
    Vec3b intensity;

    for(int i=0; i<simg.rows; i++){
        for(int j=0; j<simg.cols; j++){
            intensity = my_img.at<Vec3b>(i,j);

            histo_b[intensity.val[0]] = histo_b[intensity.val[0]] + 1;
            histo_g[intensity.val[1]] = histo_g[intensity.val[1]] + 1;
            histo_r[intensity.val[2]] = histo_r[intensity.val[2]] + 1;
        }
    }

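    // Convert the histograms to cumulative histograms (with filterFactor = 1 this is the plain CDF).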
    for(int i = 1; i<256; i++){
        histo_b[i] = histo_b[i] + filterFactor * histo_b[i-1];
        histo_g[i] = histo_g[i] + filterFactor * histo_g[i-1];
        histo_r[i] = histo_r[i] + filterFactor * histo_r[i-1];
    }

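    // Per channel, find vmin/vmax so that roughly the darkest s1% and
    // brightest s2% of pixels fall outside [vmin, vmax].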
    int vmin_b=0;
    int vmin_g=0;
    int vmin_r=0;
    int s1 = 3;
    int s2 = 3;

    while(histo_b[vmin_b+1] <= N*s1/100){
        vmin_b = vmin_b +1;
    }
    while(histo_g[vmin_g+1] <= N*s1/100){
        vmin_g = vmin_g +1;
    }
    while(histo_r[vmin_r+1] <= N*s1/100){
        vmin_r = vmin_r +1;
    }

    int vmax_b = 255-1;
    int vmax_g = 255-1;
    int vmax_r = 255-1;

    while(histo_b[vmax_b-1]>(N-((N/100)*s2)))
    {   
        vmax_b = vmax_b-1;
    }
    if(vmax_b < 255-1){
        vmax_b = vmax_b+1;
    }
    while(histo_g[vmax_g-1]>(N-((N/100)*s2)))
    {   
        vmax_g = vmax_g-1;
    }
    if(vmax_g < 255-1){
        vmax_g = vmax_g+1;
    }
    while(histo_r[vmax_r-1]>(N-((N/100)*s2)))
    {   
        vmax_r = vmax_r-1;
    }
    if(vmax_r < 255-1){
        vmax_r = vmax_r+1;
    }

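    // Saturate: clip every pixel to its channel's [vmin, vmax] range.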
    for(int i=0; i<simg.rows; i++)
    {
        for(int j=0; j<simg.cols; j++)
        {

            intensity = my_img.at<Vec3b>(i,j);

            if(intensity.val[0]<vmin_b){
                intensity.val[0] = vmin_b;
            }
            if(intensity.val[0]>vmax_b){
                intensity.val[0]=vmax_b;
            }


            if(intensity.val[1]<vmin_g){
                intensity.val[1] = vmin_g;
            }
            if(intensity.val[1]>vmax_g){
                intensity.val[1]=vmax_g;
            }


            if(intensity.val[2]<vmin_r){
                intensity.val[2] = vmin_r;
            }
            if(intensity.val[2]>vmax_r){
                intensity.val[2]=vmax_r;
            }

            my_img.at<Vec3b>(i,j) = intensity;
        }
    }

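    // Linearly stretch each clipped channel to the full [0, 255] range.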
    for(int i=0; i<simg.rows; i++){
        for(int j=0; j<simg.cols; j++){

            intensity = my_img.at<Vec3b>(i,j);
            intensity.val[0] = (intensity.val[0] - vmin_b)*255/(vmax_b-vmin_b);
            intensity.val[1] = (intensity.val[1] - vmin_g)*255/(vmax_g-vmin_g);
            intensity.val[2] = (intensity.val[2] - vmin_r)*255/(vmax_r-vmin_r);
            my_img.at<Vec3b>(i,j) = intensity;
        }
    }   


    // sharpen image using "unsharp mask" algorithm
    Mat blurred; double sigma = 1, threshold = 5, amount = 1;
    GaussianBlur(my_img, blurred, Size(), sigma, sigma);
    Mat lowContrastMask = abs(my_img - blurred) < threshold;
    Mat sharpened = my_img*(1+amount) + blurred*(-amount);
    my_img.copyTo(sharpened, lowContrastMask);    

    imshow("New Image",sharpened);
    waitKey(0);

    Mat comp_img;
    hconcat(orig_img, sharpened, comp_img);
    imwrite(argv[2], comp_img);
}

Check here for more details.

– UserVA