
I'm developing a routine for automatic enhancement of scanned 35 mm slides. I'm looking for a good algorithm for increasing contrast and removing color cast. The algorithm will have to be completely automatic, since there will be thousands of images to process. These are a couple of sample images straight from the scanner, only cropped and downsized for web:

A_Cropped B_Cropped

I'm using the AForge.NET library and have tried both the HistogramEqualization and ContrastStretch filters. HistogramEqualization is good for maximizing local contrast but does not produce pleasing results overall. ContrastStretch is way better, but since it stretches the histogram of each color band individually, it sometimes produces a strong color cast:
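A tiny numeric sketch of why per-band stretching shifts color (plain Python with made-up pixel values, not AForge.NET code): a channel that legitimately occupies a narrow range gets amplified more than the others.

```python
def stretch(channel, lo, hi):
    # Linearly map [lo, hi] onto [0, 255].
    return [round((v - lo) * 255 / (hi - lo)) for v in channel]

# A warm scene: blue is genuinely low everywhere, red/green span more range.
red   = [10, 100, 200]
green = [10, 100, 200]
blue  = [10, 55, 100]

red_s   = stretch(red,   min(red),   max(red))    # [0, 121, 255]
green_s = stretch(green, min(green), max(green))  # [0, 121, 255]
blue_s  = stretch(blue,  min(blue),  max(blue))   # [0, 128, 255]

# The midtone pixel goes from a warm (100, 100, 55) to a slightly
# blue (121, 121, 128): the per-band stretch invented a cast.
```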

A_Stretched

To reduce the color shift, I created a UniformContrastStretch filter myself using the ImageStatistics and LevelsLinear classes. This uses the same range for all color bands, preserving the colors at the expense of less contrast.

    ImageStatistics stats = new ImageStatistics(image);
    int min = Math.Min(Math.Min(stats.Red.Min, stats.Green.Min), stats.Blue.Min);
    int max = Math.Max(Math.Max(stats.Red.Max, stats.Green.Max), stats.Blue.Max);
    LevelsLinear levelsLinear = new LevelsLinear();
    levelsLinear.Input = new IntRange(min, max);
    Bitmap stretched = levelsLinear.Apply(image);

A_UniformStretched

The image is still quite blue though, so I created a ColorCorrection filter that first calculates the mean luminance of the image. A gamma correction value is then calculated for each color channel, so that the mean value of each color channel will equal the mean luminance. The uniform contrast stretched image has mean values R=70 G=64 B=93, the mean luminance being (70 + 64 + 93) / 3 = 76. The gamma values are calculated to R=1.09 G=1.18 B=0.80 and the resulting, very neutral, image has mean values of R=76 G=76 B=76 as expected:
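That per-channel gamma can be approximated by solving on the channel means (a Python sketch using the means from the post; the exponents come out slightly different from the 1.09/1.18/0.80 above, because the mean of a gamma-mapped image is not the gamma-mapped mean):

```python
import math

def gamma_exponent(channel_mean, target_mean):
    # Exponent e with 255 * (channel_mean/255)**e == target_mean.
    # e < 1 brightens the channel, e > 1 darkens it.
    return math.log(target_mean / 255.0) / math.log(channel_mean / 255.0)

means = {"R": 70, "G": 64, "B": 93}
lum = sum(means.values()) / 3.0  # ~75.7; the post rounds this to 76

exponents = {ch: gamma_exponent(m, lum) for ch, m in means.items()}
# Blue (mean 93, above the luminance) gets e > 1 and is darkened;
# red and green get e < 1 and are brightened.
```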

A_UniformStretchedCorrected

Now, getting to the real problem... I suppose correcting the mean color of the image to grey is a bit too drastic and will make some images quite dull in appearance, like the second sample (first image is uniform stretched, next is the same image color corrected):

B_UniformStretched B_UniformStretchedCorrected

One way to perform color correction manually in a photo editing program is to sample the color of a known neutral color (white/grey/black) and adjust the rest of the image to that. But since this routine has to be completely automatic, that is not an option.

I guess I could add a strength setting to my ColorCorrection filter, so that a strength of 0.5 will move the mean values half the distance to the mean luminance. But on the other hand, some images might do best without any color correction at all.
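The strength idea is just a linear interpolation of each channel's target mean toward the luminance (a minimal sketch):

```python
def blended_target(channel_mean, luminance, strength):
    # strength 0.0 leaves the channel untouched, 1.0 pulls its mean
    # all the way to the luminance, 0.5 moves it half the distance.
    return channel_mean + strength * (luminance - channel_mean)

blended_target(93, 76, 0.5)  # 84.5
```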

Any ideas for a better algorithm? Or some method to detect whether an image actually has a color cast, or simply contains a lot of some color, like the second sample?

Anlo
  • If the brightest parts of your image (without clipping) are a neutral color, you probably don't need to apply color correction. That would catch your ship image. – Mark Ransom Jan 07 '13 at 21:14
  • @MarkRansom, that's a great idea. I will definitely exclude colors that are clipped in all channels (255, 255, 255), but do you think it would be a good idea to also exclude colors that are clipped in only one channel, like (244, 249, 255)? – Anlo Jan 07 '13 at 21:54
  • A color that is only clipped in one channel would be good evidence of a color cast if the other channels are in the proper range. Think of a picture of a rose: it could easily clip on the red channel, but the others will still be very dark in comparison. The other channels need to be high enough to be considered near white, but low enough to indicate a cast. – Mark Ransom Jan 07 '13 at 22:07
  • Yes, of course, that makes sense. Makes the coding easier as well. :-) – Anlo Jan 07 '13 at 22:14
  • I would start with "Color balancing of digital photos using simple image statistics" by Gasparini and Schettini (available at http://www.sciencedirect.com/science/article/pii/S0031320304000068 if you have access; otherwise a lower-quality version can be found with a search engine). It is not necessarily new, but it is more robust. Like any paper, it also briefly surveys related research and the issues surrounding the problem. – mmgp Jan 08 '13 at 13:47
  • Reading the paper on color balancing, looks like a good solution so far. – Anlo Jan 10 '13 at 09:10
  • Hmm, the method described in the paper uses image annotation for removing regions of sky, skin, vegetation and water from the color cast detector. Unfortunately, the paper on image annotation doesn't include any training data. I have asked a new question regarding this: http://stackoverflow.com/questions/14281026/implementation-of-svm-image-annotation-for-color-cast-removal – Anlo Jan 11 '13 at 15:16
  • You could find the brightest and darkest pixels, and deduce which color the image is skewed towards. E.g., if the brightest pixel is F0:E0:FF, you know that this differs from true white by -0F:1F:00. This assumes that the brightest and darkest pixels are meant to be pure white and black, of course. – David R Tribble Jan 11 '13 at 20:29
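The heuristic from these comments can be sketched as follows (plain Python over (r, g, b) tuples; the thresholds are guesses, not values from the discussion):

```python
def has_color_cast(pixels, bright_thresh=220, neutral_tol=10):
    # Look at pixels that are bright but not clipped in every channel;
    # a pixel clipped in only one channel still counts as evidence.
    bright = [p for p in pixels
              if max(p) >= bright_thresh and min(p) < 255]
    if not bright:
        return False  # nothing bright enough to judge from
    # If the brightest pixels average out near-neutral, skip correction.
    avg = [sum(c) / len(bright) for c in zip(*bright)]
    return max(avg) - min(avg) > neutral_tol

has_color_cast([(250, 248, 252)])  # False: highlights are neutral
has_color_cast([(200, 210, 254)])  # True: highlights lean blue
```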

5 Answers

  • translated to HSV
  • the V layer is corrected by scaling values from the (min, max) range to the (0, 255) range
  • assembled back to RGB
  • the R, G, B layers of the result are corrected using the same idea as the V layer in the second step

There is no AForge.NET code because this was processed by PHP prototype code, but as far as I know there is no problem doing the same with AForge.NET. The results are:
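A stand-in for the PHP prototype, assuming the four steps above (Python with the standard colorsys module, over pixel lists rather than bitmaps):

```python
import colorsys

def stretch(v, lo, hi):
    # Linearly map [lo, hi] onto [0, 1]; leave flat channels alone.
    return (v - lo) / (hi - lo) if hi > lo else v

def enhance(pixels):
    # pixels: list of (r, g, b) tuples in 0..255
    # 1) translate to HSV
    hsv = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
           for r, g, b in pixels]
    # 2) stretch the V layer from (min, max) to the full range
    vs = [v for _, _, v in hsv]
    lo, hi = min(vs), max(vs)
    hsv = [(h, s, stretch(v, lo, hi)) for h, s, v in hsv]
    # 3) assemble back to RGB
    rgb = [colorsys.hsv_to_rgb(h, s, v) for h, s, v in hsv]
    # 4) stretch each R, G, B layer of the result the same way
    chans = list(zip(*rgb))
    lims = [(min(c), max(c)) for c in chans]
    return [tuple(round(stretch(p[i], *lims[i]) * 255) for i in range(3))
            for p in rgb]
```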

(result images)

orrollo

Convert your RGB to HSL using this:

    System.Drawing.Color color = System.Drawing.Color.FromArgb(red, green, blue);
    float hue = color.GetHue();
    float saturation = color.GetSaturation();
    float lightness = color.GetBrightness();

Adjust your saturation and lightness as needed, then convert back to RGB (note that the conversion routine below actually works in HSV, not HSL):

/// <summary>
/// Convert HSV to RGB
/// h is from 0-360
/// s,v values are 0-1
/// r,g,b values are 0-255
/// Based upon http://ilab.usc.edu/wiki/index.php/HSV_And_H2SV_Color_Space#HSV_Transformation_C_.2F_C.2B.2B_Code_2
/// </summary>
void HsvToRgb(double h, double S, double V, out int r, out int g, out int b)
{
  // ######################################################################
  // T. Nathan Mundhenk
  // mundhenk@usc.edu
  // C/C++ Macro HSV to RGB

  double H = h;
  while (H < 0) { H += 360; };
  while (H >= 360) { H -= 360; };
  double R, G, B;
  if (V <= 0)
    { R = G = B = 0; }
  else if (S <= 0)
  {
    R = G = B = V;
  }
  else
  {
    double hf = H / 60.0;
    int i = (int)Math.Floor(hf);
    double f = hf - i;
    double pv = V * (1 - S);
    double qv = V * (1 - S * f);
    double tv = V * (1 - S * (1 - f));
    switch (i)
    {

      // Red is the dominant color

      case 0:
        R = V;
        G = tv;
        B = pv;
        break;

      // Green is the dominant color

      case 1:
        R = qv;
        G = V;
        B = pv;
        break;
      case 2:
        R = pv;
        G = V;
        B = tv;
        break;

      // Blue is the dominant color

      case 3:
        R = pv;
        G = qv;
        B = V;
        break;
      case 4:
        R = tv;
        G = pv;
        B = V;
        break;

      // Red is the dominant color

      case 5:
        R = V;
        G = pv;
        B = qv;
        break;

      // Just in case we overshoot on our math by a little, we put these here. Since it's a switch, it won't slow us down at all to put these here.

      case 6:
        R = V;
        G = tv;
        B = pv;
        break;
      case -1:
        R = V;
        G = pv;
        B = qv;
        break;

      // The color is not defined, we should throw an error.

      default:
        //LFATAL("i Value error in Pixel conversion, Value is %d", i);
        R = G = B = V; // Just pretend it's black/white
        break;
    }
  }
  r = Clamp((int)(R * 255.0));
  g = Clamp((int)(G * 255.0));
  b = Clamp((int)(B * 255.0));
}

/// <summary>
/// Clamp a value to 0-255
/// </summary>
int Clamp(int i)
{
  if (i < 0) return 0;
  if (i > 255) return 255;
  return i;
}

Original Code:

Shreyas Kapur

You can try the auto brightness and contrast code from this link: http://answers.opencv.org/question/75510/how-to-make-auto-adjustmentsbrightness-and-contrast-for-image-android-opencv-image-correction/

void Utils::BrightnessAndContrastAuto(const cv::Mat &src, cv::Mat &dst, float clipHistPercent)
{

    CV_Assert(clipHistPercent >= 0);
    CV_Assert((src.type() == CV_8UC1) || (src.type() == CV_8UC3) || (src.type() == CV_8UC4));

    int histSize = 256;
    float alpha, beta;
    double minGray = 0, maxGray = 0;

    //to calculate grayscale histogram
    cv::Mat gray;
    if (src.type() == CV_8UC1) gray = src;
    else if (src.type() == CV_8UC3) cvtColor(src, gray, CV_BGR2GRAY);
    else if (src.type() == CV_8UC4) cvtColor(src, gray, CV_BGRA2GRAY);
    if (clipHistPercent == 0)
    {
        // keep full available range
        cv::minMaxLoc(gray, &minGray, &maxGray);
    }
    else
    {
        cv::Mat hist; //the grayscale histogram

        float range[] = { 0, 256 };
        const float* histRange = { range };
        bool uniform = true;
        bool accumulate = false;
        calcHist(&gray, 1, 0, cv::Mat(), hist, 1, &histSize, &histRange, uniform, accumulate);

        // calculate cumulative distribution from the histogram
        std::vector<float> accumulator(histSize);
        accumulator[0] = hist.at<float>(0);
        for (int i = 1; i < histSize; i++)
        {
            accumulator[i] = accumulator[i - 1] + hist.at<float>(i);
        }

        // locate points that cuts at required value
        float max = accumulator.back();
        clipHistPercent *= (max / 100.0); //make percent as absolute
        clipHistPercent /= 2.0; // left and right wings
        // locate left cut
        minGray = 0;
        while (accumulator[minGray] < clipHistPercent)
            minGray++;

        // locate right cut
        maxGray = histSize - 1;
        while (accumulator[maxGray] >= (max - clipHistPercent))
            maxGray--;
    }

    // current range
    float inputRange = maxGray - minGray;

    alpha = (histSize - 1) / inputRange;   // alpha expands current range to histsize range
    beta = -minGray * alpha;             // beta shifts current range so that minGray will go to 0

    // Apply brightness and contrast normalization
    // convertTo operates with saturate_cast
    src.convertTo(dst, -1, alpha, beta);

    // restore alpha channel from source 
    if (dst.type() == CV_8UC4)
    {
        int from_to[] = { 3, 3 };
        cv::mixChannels(&src, 4, &dst, 1, from_to, 1);
    }
    return;
}
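The histogram-clipping step above can be restated without OpenCV (a pure-Python sketch over a flat list of 8-bit grayscale values, assuming clip_percent > 0; it returns the parameters rather than applying them, and the cv2 calls are replaced with plain loops):

```python
def auto_contrast_params(gray, clip_percent=1.0):
    # 256-bin histogram and its cumulative distribution.
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    cum, total = [], 0
    for c in hist:
        total += c
        cum.append(total)

    # Drop clip_percent of the pixels, split between both tails.
    clip = total * clip_percent / 100.0 / 2.0
    min_gray = 0
    while cum[min_gray] < clip:
        min_gray += 1
    max_gray = 255
    while cum[max_gray] >= total - clip:
        max_gray -= 1

    # alpha expands [min_gray, max_gray] to [0, 255]; beta shifts
    # min_gray down to 0, exactly as in the C++ version.
    alpha = 255.0 / (max_gray - min_gray)
    beta = -min_gray * alpha
    return min_gray, max_gray, alpha, beta
```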

Or apply Auto Color Balance from this link : http://www.morethantechnical.com/2015/01/14/simplest-color-balance-with-opencv-wcode/

void Utils::SimplestCB(Mat& in, Mat& out, float percent) {
    assert(in.channels() == 3);
    assert(percent > 0 && percent < 100);

    float half_percent = percent / 200.0f;

    vector<Mat> tmpsplit; split(in, tmpsplit);
    for (int i = 0; i < 3; i++) {
        //find the low and high percentile values (based on the input percentile)
        Mat flat; tmpsplit[i].reshape(1, 1).copyTo(flat);
        cv::sort(flat, flat, CV_SORT_EVERY_ROW + CV_SORT_ASCENDING);
        int lowval = flat.at<uchar>(cvFloor(((float)flat.cols) * half_percent));
        int highval = flat.at<uchar>(cvCeil(((float)flat.cols) * (1.0 - half_percent)));
        cout << lowval << " " << highval << endl;

        //saturate below the low percentile and above the high percentile
        tmpsplit[i].setTo(lowval, tmpsplit[i] < lowval);
        tmpsplit[i].setTo(highval, tmpsplit[i] > highval);

        //scale the channel
        normalize(tmpsplit[i], tmpsplit[i], 0, 255, NORM_MINMAX);
    }
    merge(tmpsplit, out);
}
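The same percentile clipping can be sketched per channel in pure Python (a sorted list instead of cv::sort; the index handling is simplified slightly from the C++):

```python
def simplest_cb_channel(values, percent=1.0):
    # Clip percent/2 of the pixels at each end of this channel,
    # then stretch what remains to 0..255.
    srt = sorted(values)
    half = percent / 200.0
    lo = srt[int(len(srt) * half)]
    hi = srt[min(int(len(srt) * (1.0 - half)), len(srt) - 1)]
    clipped = [min(max(v, lo), hi) for v in values]
    if hi == lo:
        return clipped
    return [round((v - lo) * 255 / (hi - lo)) for v in clipped]
```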

Or apply CLAHE to BGR image

gameon67

To avoid changing the colors of your image when stretching the contrast, convert it first to the HSV/HSL color space. Then apply regular contrast stretching to the L or V channel, but do not change the H or S channels.

Alceu Costa

I needed to do the same thing over a big library of video thumbnails. I wanted a solution that would be conservative, so that I didn't have to spot check for thumbnails getting completely trashed. Here's the messy, hacked-together solution I used.

I used this class to calculate the distribution of colors in an image. I first tried an HSV-colorspace version, but found a grayscale-based one was way faster and almost as good:

class GrayHistogram
  def initialize(filename)
    @hist = hist(filename)
    @percentile = {}
  end

  def percentile(x)
    return @percentile[x] if @percentile[x]
    bin = @hist.find{ |h| h[:count] > x }
    c = bin[:color]
    return @percentile[x] ||= c/256.0
  end

  def midpoint
    (percentile(0.25) + percentile(0.75)) / 2.0
  end

  def spread
    percentile(0.75) - percentile(0.25)
  end

private
  def hist(imgFilename)
    histFilename = "/tmp/gray_hist.txt"

    safesystem("convert #{imgFilename} -depth 8 -resize 50% -colorspace GRAY /tmp/out.png")
    safesystem("convert /tmp/out.png -define histogram:unique-colors=true " +
               "        -format \"%c\" histogram:info:- > #{histFilename}")

    f = File.open(histFilename)
    lines = f.readlines[0..-2] # the last line is always blank
    hist = lines.map { |line| { :count => /([0-9]*):/.match(line)[1].to_i, :color => /,([0-9]*),/.match(line)[1].to_i } }
    f.close

    tot = 0
    cumhist = hist.map do |h|
      tot += h[:count]
      {:count=>tot, :color=>h[:color]}
    end
    tot = tot.to_f
    cumhist.each { |h| h[:count] = h[:count] / tot }

    safesystem("rm /tmp/out.png #{histFilename}")

    return cumhist
  end
end

I then created this class to use the histogram to figure out how to correct an image:

def safesystem(str)
  out = `#{str}`
  if $? != 0
    puts "shell command failed:"
    puts "\tcmd: #{str}"
    puts "\treturn code: #{$?}"
    puts "\toutput: #{out}"
    raise
  end
end

def generateHist(thumb, hist)
  safesystem("convert #{thumb} histogram:hist.jpg && mv hist.jpg #{hist}")
end

class ImgCorrector
  def initialize(filename)
    @filename = filename
    @grayHist = GrayHistogram.new(filename)
  end

  def flawClass
    if !@flawClass
      gapLeft  = (@grayHist.percentile(0.10) > 0.13) || (@grayHist.percentile(0.25) > 0.30)
      gapRight = (@grayHist.percentile(0.75) < 0.60) || (@grayHist.percentile(0.90) < 0.80)

      return (@flawClass="low"   ) if (!gapLeft &&  gapRight)
      return (@flawClass="high"  ) if ( gapLeft && !gapRight)
      return (@flawClass="narrow") if ( gapLeft &&  gapRight)
      return (@flawClass="fine"  )
    end
    return @flawClass
  end

  def percentileSummary
    [ @grayHist.percentile(0.10),
      @grayHist.percentile(0.25),
      @grayHist.percentile(0.75),
      @grayHist.percentile(0.90) ].map{ |x| (((x*100.0*10.0).round)/10.0).to_s }.join(', ') +
    "<br />" +
    "spread: " + @grayHist.spread.to_s
  end

  def writeCorrected(filenameOut)
    if flawClass=="fine"
      safesystem("cp #{@filename} #{filenameOut}")
      return
    end

    # spread out the histogram, centered at the midpoint
    midpt = 100.0*@grayHist.midpoint

    # map the histogram's spread to a sigmoidal concept (linearly)
    minSpread = 0.10
    maxSpread = 0.60
    minS = 1.0
    maxS = case flawClass
      when "low"    then 5.0
      when "high"   then 5.0
      when "narrow" then 6.0
    end
    s = ((1.0 - [[(@grayHist.spread - minSpread)/(maxSpread-minSpread), 0.0].max, 1.0].min) * (maxS - minS)) + minS

    #puts "s: #{s}"
    safesystem("convert #{@filename} -sigmoidal-contrast #{s},#{midpt}% #{filenameOut}")
  end
end
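The spread-to-strength mapping inside writeCorrected can be isolated like this (a Python transcription of the Ruby formula, using the answer's constants):

```python
def sigmoid_strength(spread, min_spread=0.10, max_spread=0.60,
                     min_s=1.0, max_s=6.0):
    # A narrow histogram spread gets a strong sigmoidal-contrast boost;
    # a wide one is left almost untouched.
    t = (spread - min_spread) / (max_spread - min_spread)
    t = min(max(t, 0.0), 1.0)
    return (1.0 - t) * (max_s - min_s) + min_s

sigmoid_strength(0.10)  # 6.0 -> strongest correction
sigmoid_strength(0.60)  # 1.0 -> essentially none
```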

I ran it like so:

origThumbs = `find thumbs | grep jpg`.split("\n")
origThumbs.each do |origThumb|
  newThumb = origThumb.gsub(/thumb/, "newthumb")
  imgCorrector = ImgCorrector.new(origThumb)
  imgCorrector.writeCorrected(newThumb)
end
Jim Lindstrom