
I have a sample image and a target image. I want to transfer the color shades of the sample image to the target image. Please tell me how to extract the color from the sample image.

Here are the images:

Input source image: [image]

Input map for the desired output image: [image]

Output image: [image]

  • What programming language are you planning to use, are there any restrictions? What have you tried so far? – FriendFX May 17 '15 at 06:51
  • Add a link to the source image, and if you have one, also a target image example, so we can see what effect you want to achieve. Also look [here](http://stackoverflow.com/a/22891902/2521214), it might help a bit. – Spektre May 17 '15 at 07:28
  • Any programming language can be used for this. I am only focusing on a logical solution. Thanks for the help: https://copy.com/NiDwQqjFoeEP8vcD – May 17 '15 at 07:36
  • Added images to your question (hope I got the meaning right): convert the light map to grayscale, take the average color from the source image and multiply the light map by it (each color band separately). – Spektre May 17 '15 at 16:28
  • I tried converting the image by calculating mean color values from the sample image and then multiplying the target image by them, and I got the result shown in the image above, but it does not work for every sample image. For some images the target converted with the extracted mean color is far different from the sample image. – May 17 '15 at 19:29

5 Answers


You can use a technique called "Histogram matching" (another description)

Basically, you use the histogram of your source image as a goal and transform the values of each input-map pixel so that the output histogram gets as close to the source histogram as possible. You do this for each RGB channel of the image.

Here is my Python code for that:

from scipy.misc import imsave, imread
import numpy as np

imsrc = imread("source.jpg")
imtint = imread("tint_target.jpg")
nbr_bins=255
imres = imsrc.copy()
for d in range(3):
    imhist,bins = np.histogram(imsrc[:,:,d].flatten(),nbr_bins,density=True)
    tinthist,bins = np.histogram(imtint[:,:,d].flatten(),nbr_bins,density=True)

    cdfsrc = imhist.cumsum() #cumulative distribution function
    cdfsrc = (255 * cdfsrc / cdfsrc[-1]).astype(np.uint8) #normalize

    cdftint = tinthist.cumsum() #cumulative distribution function
    cdftint = (255 * cdftint / cdftint[-1]).astype(np.uint8) #normalize


    # map source values through the source CDF, then through the inverse tint CDF
    im2 = np.interp(imsrc[:,:,d].flatten(), bins[:-1], cdfsrc)
    im3 = np.interp(im2, cdftint, bins[:-1])

    imres[:,:,d] = im3.reshape((imsrc.shape[0],imsrc.shape[1] ))

imsave("histnormresult.jpg", imres)

The output for your samples will look like this:

[result image]

You could also try doing the same in HSV color space; it might give better results.
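For example, a rough OpenCV/NumPy sketch of that HSV variant might look like this (a sketch only, reusing the file names from above; `match_channel` is just a helper made up for the illustration):

import cv2
import numpy as np

def match_channel(src, ref):
    # remap src so its histogram approximates ref's (both single-channel uint8)
    src_hist, _ = np.histogram(src.ravel(), 256, [0, 256])
    ref_hist, _ = np.histogram(ref.ravel(), 256, [0, 256])
    src_cdf = src_hist.cumsum() / src_hist.sum()
    ref_cdf = ref_hist.cumsum() / ref_hist.sum()
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[src]

imsrc = cv2.imread("source.jpg")         # image to recolor (as above)
imtint = cv2.imread("tint_target.jpg")   # image providing the color shades
src_hsv = cv2.cvtColor(imsrc, cv2.COLOR_BGR2HSV)
tint_hsv = cv2.cvtColor(imtint, cv2.COLOR_BGR2HSV)
out_hsv = src_hsv.copy()
for c in range(3):                       # match H, S and V separately
    out_hsv[:, :, c] = match_channel(src_hsv[:, :, c], tint_hsv[:, :, c])
cv2.imwrite("histnormresult_hsv.jpg", cv2.cvtColor(out_hsv, cv2.COLOR_HSV2BGR))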

– vzaguskin

I think the hardest part is to determine the dominant color of the first image. Just looking at it, with all the highlights and shadows, the best overall color will be the one that has the highest combination of brightness and saturation. I start with a blurred image to reduce the effects of noise and other anomalies, then convert each pixel to the HSV color space for the brightness and saturation measurement. Here's how it looks in Python with PIL and colorsys:

import colorsys
from PIL import ImageFilter

# im1 is the sample image (a PIL Image in RGB mode)
blurred = im1.filter(ImageFilter.BLUR)
ld = blurred.load()
max_hsv = (0, 0, 0)  # (h, s, v) of the best pixel found so far
for y in range(blurred.size[1]):
    for x in range(blurred.size[0]):
        r, g, b = tuple(c / 255. for c in ld[x, y])
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if s + v > max_hsv[1] + max_hsv[2]:
            max_hsv = h, s, v
r, g, b = tuple(int(c * 255) for c in colorsys.hsv_to_rgb(*max_hsv))

For your image I get a color of (210, 61, 74), which looks like:

[color swatch: (210, 61, 74)]

From that point it's just a matter of transferring the hue and saturation to the other image.
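A minimal sketch of that transfer, keeping each pixel's brightness but replacing its hue and saturation with the dominant color's (the target file name below is just a placeholder), could look like:

import colorsys
from PIL import Image

dom_h, dom_s, _ = colorsys.rgb_to_hsv(r / 255., g / 255., b / 255.)
im2 = Image.open("target.jpg").convert("RGB")   # placeholder for the other image
px = im2.load()
for y in range(im2.size[1]):
    for x in range(im2.size[0]):
        tr, tg, tb = (c / 255. for c in px[x, y])
        _, _, v = colorsys.rgb_to_hsv(tr, tg, tb)   # keep only the brightness
        nr, ng, nb = colorsys.hsv_to_rgb(dom_h, dom_s, v)
        px[x, y] = tuple(int(c * 255) for c in (nr, ng, nb))
im2.save("recolored.jpg")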

– Mark Ransom

The histogram matching solutions above did not work for me. Here is my own, based on OpenCV:

import cv2
import numpy as np

def match_image_histograms(image, reference):
    chans1 = cv2.split(image)
    chans2 = cv2.split(reference)

    new_chans = []
    for ch1, ch2 in zip(chans1, chans2):
        hist1 = cv2.calcHist([ch1], [0], None, [256], [0, 256])
        hist1 /= hist1.sum()
        hist2 = cv2.calcHist([ch2], [0], None, [256], [0, 256])
        hist2 /= hist2.sum()
        # build a lookup table from the two cumulative distributions
        lut = np.searchsorted(hist1.cumsum(), hist2.cumsum())
        lut = np.clip(lut, 0, 255).astype(np.uint8)  # cv2.LUT expects a 256-entry 8-bit table
        new_chans.append(cv2.LUT(ch1, lut))
    return cv2.merge(new_chans).astype('uint8')
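Hypothetical usage (the file names are placeholders; pass the image you want to recolor first and the sample providing the shades second):

import cv2

target = cv2.imread("map.jpg")      # the image to recolor
sample = cv2.imread("sample.jpg")   # the image whose color shades you want
result = match_image_histograms(target, sample)
cv2.imwrite("matched.jpg", result)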
– MindV0rtex
  1. Obtain the average color from the color map

     Ignore saturated white/black colors.

  2. Convert the light map to grayscale.

  3. Change the dynamic range of the light map to match your desired output.

     I use the maximum dynamic range. You could instead compute the range of the color map and set it for the light map.

  4. Multiply the light map by the average color.

This is what it looks like:

  • recolor example

And this is the C++ source code:

//picture pic0,pic1,pic2;
    // pic0 - source color
    // pic1 - source light map
    // pic2 - output
int x,y,rr,gg,bb,i,i0,i1;
double r,g,b,a;

// init output as source light map in grayscale i=r+g+b
pic2=pic1;
pic2.rgb2i();
// change light map dynamic range to maximum
i0=pic2.p[0][0].dd; // min
i1=pic2.p[0][0].dd; // max
for (y=0;y<pic2.ys;y++)
 for (x=0;x<pic2.xs;x++)
    {
    i=pic2.p[y][x].dd;
    if (i0>i) i0=i;
    if (i1<i) i1=i;
    }
for (y=0;y<pic2.ys;y++)
 for (x=0;x<pic2.xs;x++)
    {
    i=pic2.p[y][x].dd;
    i=(i-i0)*767/(i1-i0);
    pic2.p[y][x].dd=i;
    }
// extract average color from color map (normalized to unit vector)
for (r=0.0,g=0.0,b=0.0,y=0;y<pic0.ys;y++)
 for (x=0;x<pic0.xs;x++)
    {
    rr=BYTE(pic0.p[y][x].db[picture::_r]);
    gg=BYTE(pic0.p[y][x].db[picture::_g]);
    bb=BYTE(pic0.p[y][x].db[picture::_b]);
    i=rr+gg+bb;
    if (i<400) // ignore saturated colors (whitish) 3*255=white
     if (i>16) // ignore too dark colors (blackish) 0=black
        {
        r+=rr;
        g+=gg;
        b+=bb;
        }
    }
a=1.0/sqrt((r*r)+(g*g)+(b*b)); r*=a; g*=a; b*=a;
// recolor output
for (y=0;y<pic2.ys;y++)
 for (x=0;x<pic2.xs;x++)
    {
    a=DWORD(pic2.p[y][x].dd);
    rr=r*a; if (rr>255) rr=255; pic2.p[y][x].db[picture::_r]=BYTE(rr);
    gg=g*a; if (gg>255) gg=255; pic2.p[y][x].db[picture::_g]=BYTE(gg);
    bb=b*a; if (bb>255) bb=255; pic2.p[y][x].db[picture::_b]=BYTE(bb);
    }

I am using my own picture class, so here are some of its members:

xs, ys – image size in pixels
p[y][x].dd – pixel at (x,y) position as a 32-bit integer type
p[y][x].db[4] – pixel access by color bands (r, g, b, a)
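In Python/NumPy the same four steps might look roughly like this (a sketch only, not a drop-in for the class above; the file names are placeholders):

import cv2
import numpy as np

color_map = cv2.imread("color_map.jpg").astype(float)   # pic0: source color
light_map = cv2.imread("light_map.jpg").astype(float)   # pic1: source light map

# 1) average color of the color map, ignoring near-white and near-black pixels
s = color_map.sum(axis=2)                  # i = r + g + b per pixel
mask = (s > 16) & (s < 400)
avg = color_map[mask].mean(axis=0)
avg /= np.linalg.norm(avg)                 # normalize to a unit vector

# 2) light map to grayscale intensity
i = light_map.sum(axis=2)

# 3) stretch the intensity to the maximum dynamic range (0..767, as in the C++ code)
i = (i - i.min()) * 767.0 / (i.max() - i.min())

# 4) multiply the intensity by the average color, clip and save
out = np.clip(i[..., None] * avg, 0, 255).astype(np.uint8)
cv2.imwrite("recolored.jpg", out)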

[notes]

If this does not meet your needs, then please specify more and add more images, because your current example is really not self-explanatory.

– Spektre

Regarding the previous answer, one thing to be careful with: once the CDF reaches its maximum, the interpolation gets misled and matches your values incorrectly. To avoid this, you should give the interpolation function only the meaningful part of the CDF (up to where it reaches its maximum) and the corresponding bins. Here is the adapted answer:

from scipy.misc import imsave, imread
import numpy as np

imsrc = imread("source.jpg")
imtint = imread("tint_target.jpg")
nbr_bins=255
imres = imsrc.copy()
for d in range(3):
    imhist,bins = np.histogram(imsrc[:,:,d].flatten(),nbr_bins,density=True)
    tinthist,bins = np.histogram(imtint[:,:,d].flatten(),nbr_bins,density=True)

    cdfsrc = imhist.cumsum() #cumulative distribution function
    cdfsrc = (255 * cdfsrc / cdfsrc[-1]).astype(np.uint8) #normalize

    cdftint = tinthist.cumsum() #cumulative distribution function
    cdftint = (255 * cdftint / cdftint[-1]).astype(np.uint8) #normalize


    im2 = np.interp(imsrc[:,:,d].flatten(),bins[:-1],cdfsrc)

    # use only the part of the tint CDF before it saturates at its maximum
    if (cdftint == cdftint.max()).sum() > 0:
        idx_max = np.where(cdftint == cdftint.max())[0][0]
        im3 = np.interp(im2, cdftint[:idx_max+1], bins[:idx_max+1])
    else:
        im3 = np.interp(im2, cdftint, bins[:-1])

    imres[:,:,d] = im3.reshape((imsrc.shape[0], imsrc.shape[1]))

imsave("histnormresult.jpg", imres)

Enjoy!

– Frank