
I am trying to blend 2 images so that the seams between them disappear.

1st image: [image]

2nd image: [image]

If blending is NOT applied: [image]

If blending is applied: [image]

I used alpha blending, but no seam was removed; in fact, the image is still the same, only darker.

This is the part where I do the blending:

Mat warped1;
warpPerspective(left, warped1, perspectiveTransform, front.size()); // warping may be used for correcting image distortion
imshow("combined1", warped1/2 + front/2);

vector<Mat> imgs;
imgs.push_back(warped1/2);
imgs.push_back(front/2);
double alpha = 0.5;
int min_x = (imgs[0].cols - imgs[1].cols)/2;
int min_y = (imgs[0].rows - imgs[1].rows)/2;
int width, height;
if(min_x < 0) {
    min_x = 0;
    width = imgs.at(0).cols;
}
else
    width = imgs.at(1).cols;
if(min_y < 0) {
    min_y = 0;
    height = imgs.at(0).rows - 1;
}
else
    height = imgs.at(1).rows - 1;
Rect roi = cv::Rect(min_x, min_y, imgs[1].cols, imgs[1].rows);
Mat out_image = imgs[0].clone();
Mat A_roi = imgs[0](roi);
Mat out_image_roi = out_image(roi);
addWeighted(A_roi, alpha, imgs[1], 1-alpha, 0.0, out_image_roi);
imshow("foo", imgs[0](roi));
Steph
  • can you provide the warped images please? I wrote a blending function, but I do need warped/aligned images to demonstrate. Please provide aligned images of the same size. – Micka Mar 11 '14 at 11:27

4 Answers


I chose to define the alpha value depending on the distance to the "object center": the further from the object center, the smaller the alpha value. The "object" is defined by a mask.
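(In effect, inside the overlap each output pixel of the code below is `out = (dist1*img1 + dist2*img2) / (dist1 + dist2)`, a linear combination whose weights sum to 1; outside the overlap, the unblended pixel is kept.)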

I've aligned the images with GIMP (similar to your warpPerspective). They need to be in the same coordinate system, and both images must have the same size.

My input images look like this:

[image]

[image]

int main()
{
    cv::Mat i1 = cv::imread("blending/i1_2.png");
    cv::Mat i2 = cv::imread("blending/i2_2.png");

    cv::Mat m1 = cv::imread("blending/i1_2.png", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat m2 = cv::imread("blending/i2_2.png", CV_LOAD_IMAGE_GRAYSCALE);

    // works too, for a background near white:
    // m1 = m1 < 220;
    // m2 = m2 < 220;

    // edited: using Otsu thresholding. If it doesn't work, you have to create your own masks with a better technique
    cv::threshold(m1, m1, 255, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);
    cv::threshold(m2, m2, 255, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);

    cv::Mat out = computeAlphaBlending(i1, m1, i2, m2);
    cv::imshow("blended", out); // added: the result was computed but never displayed

    cv::waitKey(-1);
    return 0;
}

With the blending function (it needs some comments and optimizations, I guess; I'll add them later):

cv::Mat computeAlphaBlending(cv::Mat image1, cv::Mat mask1, cv::Mat image2, cv::Mat mask2)
{
// edited: find regions where no mask is set
// compute the region where no mask is set at all, to use those color values unblended
cv::Mat bothMasks = mask1 | mask2;
cv::imshow("maskOR",bothMasks);
cv::Mat noMask = 255-bothMasks;
// ------------------------------------------

// create an image with equal alpha values:
cv::Mat rawAlpha = cv::Mat(noMask.rows, noMask.cols, CV_32FC1);
rawAlpha = 1.0f;

// invert the border, so that border values are 0 ... this is needed for the distance transform
cv::Mat border1 = 255-border(mask1);
cv::Mat border2 = 255-border(mask2);

// show the immediate results for debugging and verification, should be an image where the border of the face is black, rest is white
cv::imshow("b1", border1);
cv::imshow("b2", border2);

// compute the distance to the object center
cv::Mat dist1;
cv::distanceTransform(border1,dist1,CV_DIST_L2, 3);

// scale distances to values between 0 and 1
double min, max; cv::Point minLoc, maxLoc;

// find min/max vals
cv::minMaxLoc(dist1,&min,&max, &minLoc, &maxLoc, mask1&(dist1>0));  // edited: find min values > 0
dist1 = dist1 * 1.0/max; // values between 0 and 1, since the min val should always be 0

// same for the 2nd image
cv::Mat dist2;
cv::distanceTransform(border2,dist2,CV_DIST_L2, 3);
cv::minMaxLoc(dist2,&min,&max, &minLoc, &maxLoc, mask2&(dist2>0));  // edited: find min values > 0
dist2 = dist2*1.0/max;  // values between 0 and 1


//TODO: now, the exact border has value 0 too... to fix that, enter very small values wherever border pixel is set...

// mask the distance values to reduce information to masked regions
cv::Mat dist1Masked;
rawAlpha.copyTo(dist1Masked,noMask);    // edited: where no mask is set, blend with equal values
dist1.copyTo(dist1Masked,mask1);
rawAlpha.copyTo(dist1Masked,mask1&(255-mask2)); //edited

cv::Mat dist2Masked;
rawAlpha.copyTo(dist2Masked,noMask);    // edited: where no mask is set, blend with equal values
dist2.copyTo(dist2Masked,mask2);
rawAlpha.copyTo(dist2Masked,mask2&(255-mask1)); //edited

cv::imshow("d1", dist1Masked);
cv::imshow("d2", dist2Masked);

// dist1Masked and dist2Masked now hold the "quality" of the pixel of the image, so the higher the value, the more of that pixels information should be kept after blending
// problem: these quality weights don't build a linear combination yet

// you want a linear combination of both image's pixel values, so at the end you have to divide by the sum of both weights
cv::Mat blendMaskSum = dist1Masked+dist2Masked;
//cv::imshow("blendmask==0",(blendMaskSum==0));

// you have to convert the images to float to multiply with the weight
cv::Mat im1Float;
image1.convertTo(im1Float,dist1Masked.type());
cv::imshow("im1Float", im1Float/255.0);

// TODO: you could replace those splitting and merging if you just duplicate the channel of dist1Masked and dist2Masked
// the splitting is just used here to use .mul later... which needs same number of channels
std::vector<cv::Mat> channels1;
cv::split(im1Float,channels1);
// multiply pixel value with the quality weights for image 1
cv::Mat im1AlphaB = dist1Masked.mul(channels1[0]);
cv::Mat im1AlphaG = dist1Masked.mul(channels1[1]);
cv::Mat im1AlphaR = dist1Masked.mul(channels1[2]);

std::vector<cv::Mat> alpha1;
alpha1.push_back(im1AlphaB);
alpha1.push_back(im1AlphaG);
alpha1.push_back(im1AlphaR);
cv::Mat im1Alpha;
cv::merge(alpha1,im1Alpha);
cv::imshow("alpha1", im1Alpha/255.0);

cv::Mat im2Float;
image2.convertTo(im2Float,dist2Masked.type());

std::vector<cv::Mat> channels2;
cv::split(im2Float,channels2);
// multiply pixel value with the quality weights for image 2
cv::Mat im2AlphaB = dist2Masked.mul(channels2[0]);
cv::Mat im2AlphaG = dist2Masked.mul(channels2[1]);
cv::Mat im2AlphaR = dist2Masked.mul(channels2[2]);

std::vector<cv::Mat> alpha2;
alpha2.push_back(im2AlphaB);
alpha2.push_back(im2AlphaG);
alpha2.push_back(im2AlphaR);
cv::Mat im2Alpha;
cv::merge(alpha2,im2Alpha);
cv::imshow("alpha2", im2Alpha/255.0);

// now sum both weighted images and divide by the sum of the weights (linear combination)
cv::Mat imBlendedB = (im1AlphaB + im2AlphaB)/blendMaskSum;
cv::Mat imBlendedG = (im1AlphaG + im2AlphaG)/blendMaskSum;
cv::Mat imBlendedR = (im1AlphaR + im2AlphaR)/blendMaskSum;
std::vector<cv::Mat> channelsBlended;
channelsBlended.push_back(imBlendedB);
channelsBlended.push_back(imBlendedG);
channelsBlended.push_back(imBlendedR);

// merge back to 3 channel image
cv::Mat merged;
cv::merge(channelsBlended,merged);

// convert to 8UC3
cv::Mat merged8U;
merged.convertTo(merged8U,CV_8UC3);

return merged8U;
}

And the helper function:

cv::Mat border(cv::Mat mask)
{
cv::Mat gx;
cv::Mat gy;

cv::Sobel(mask,gx,CV_32F,1,0,3);
cv::Sobel(mask,gy,CV_32F,0,1,3);

cv::Mat border;
cv::magnitude(gx,gy,border);

return border > 100;
}
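
Regarding the TODO about replacing the split/merge: duplicating the one-channel weight map to three channels would allow a single mul(). A minimal, untested sketch (using the variable names from computeAlphaBlending above):

// sketch: build a 3-channel copy of the 1-channel weight map, then one mul() does all channels
std::vector<cv::Mat> weightChannels(3, dist1Masked); // three headers sharing the same data
cv::Mat dist1Masked3;
cv::merge(weightChannels, dist1Masked3);             // now CV_32FC3, same weight in each channel
cv::Mat im1Alpha = im1Float.mul(dist1Masked3);       // same result as the per-channel mul + merge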

With this result:

[image]

Edit: forgot a function ;) Edit: now keeping the original background

Micka
  • Hi @Micka. I have a query: I tried blending the same images above with their original background. Their original background is kind of white, but not pure white. So the result is like the original one in the question: it's as if the blending did not take place. I am thinking it has to do with m1 = m1 > 0/255 or m2 = m2 > 0/255, because their background is neither white nor black. Can you advise me please? I want to keep the original background – Steph Mar 20 '14 at 13:42
  • @Steph, I adjusted the algorithm to keep the background (which it didn't before). But please keep in mind that blending quality depends on the quality of the input masks, which should look like the first two images of the answer of Haris. If computing the mask with the added `cv::threshold` (previously m1 = m1>0) does not give good results, you will have to use better functions to extract your target objects from the single images (otherwise it's not clear what/where the blending should be done). – Micka Mar 21 '14 at 20:10
  • Hi @Micka. The backgrounds above are not the original ones. I get this when thresholding in the main: http://i.imgur.com/We45tSZ.jpg and the final result is http://i.imgur.com/mdxUSHe.jpg :/ – Steph Mar 22 '14 at 01:15
  • I tried using the mask of Haris below. It generated the mask, and the panoramic face is fine except that it has a small dark dot in the front face. But if I have other faces it won't work, because the dimensions of the face are not the same for every person. I tried it on another face and it does not generate any mask :/ – Steph Mar 22 '14 at 02:20
  • Did you try to extract the outermost contour? Maybe that would be a good mask border... – Micka Mar 22 '14 at 09:56
  • Maybe you'll have to read literature about image segmentation. – Micka Mar 22 '14 at 09:59
  • I was reading about Poisson blending and it says that it splits the channels of the source and target image and finally blends them together. Is the above method an implementation of Poisson, @Micka? – Steph Mar 25 '14 at 19:43
  • No it is not. It's simple alpha blending. Splitting the channels is used there only for multiplication with the 1-channel alpha map – Micka Mar 25 '14 at 19:57
  • Hi @Micka, could you be clearer about why splitting each image into its separate B, G and R channels is necessary? I understand that the blue channel of one image is combined with the blue channel of the other image etc., but why do this? I have read the Porter & Duff paper "Compositing Digital Images" but I could not understand it. And what is the 1-channel alpha map? Is it a map of all the colors (B, G and R) combined? Thank you – Steph Mar 28 '14 at 13:14
  • It isn't necessary, but OpenCV only allows you to multiply matrices per-element if they have the same size. So there are two different ways: 1. create your matrix of alpha values so that it has 3 channels (with the same alpha value for each channel of a pixel) and multiply that with the whole image, or 2. create the alpha value matrix with one channel only and split the image channels to multiply 1-channel matrices. As far as I remember I commented in the source code, too, that it might be more efficient to duplicate the alpha value channel, but I didn't know the OpenCV syntax for that, so I was too lazy to search – Micka Mar 28 '14 at 13:20
  • But what is the role of this alpha value? And what is the theory behind it? – Steph Mar 28 '14 at 13:54
  • The idea is to multiply a pixel of the first image with `alpha` and the pixel of the second image with `1-alpha`. That guarantees a linear combination of both pixels. `Alpha` should depend on the quality of the pixel in the first image compared to the quality of the pixel in the second image. Quality is defined here as the distance to the face center: the further away from the face center, the more a pixel loses quality, and the pixel of the other image gets increased alpha values. In my code, the alpha values are `distance1/(distance1+distance2)` and `distance2/(distance1+distance2)` – Micka Mar 28 '14 at 14:02
  • @Steph have a look at http://graphics.cs.cmu.edu/courses/15-463/2010_spring/Lectures/blending.pdf which looks quite good. My technique is actually like the one shown on pages/slides 5-11. – Micka Mar 28 '14 at 14:10
  • A short term for it would maybe be `center-weighted feathering in alpha blending`. – Micka Mar 28 '14 at 14:15
  • I came across this lecture! Seriously, I can't understand it :/ There was no variable named alpha or a specific value assigned to it; that is why I can't understand the meaning. I understood the algorithm up to "blendMaskSum", but the splitting part afterwards, nope :s – Steph Mar 28 '14 at 14:23
  • It's just `img1*(dist1/(dist1+dist2)) + img2*(dist2/(dist1+dist2))`, which is always a linear combination (if at least one of dist1/dist2 isn't zero) and nearly equivalent to `img1*(alpha) + img2*(1-alpha)`. That's all that happens with all the splitting and merging. – Micka Mar 28 '14 at 14:34
  • I think I understood what you are trying to say.. but nowhere in your code is this formula: (dist1/(dist1+dist2)) or this one: (dist2/(dist1+dist2)) – Steph Mar 28 '14 at 15:36
  • `img1*dist1` is in the lines `im1AlphaB|G|R = dist1Masked.mul(channels1[0|1|2])` same for `img2`. And finally the addition and division: `cv::Mat imBlendedB|G|R = (im1AlphaB|G|R + im2AlphaB|G|R)/blendMaskSum` – Micka Mar 28 '14 at 19:55
  • Hi Micka! It's me again. I was trying out the original blending code that you wrote on different types of images. Here are the 3 images that I tried (http://i.imgur.com/6tR8oD2.jpg) (http://i.imgur.com/b5wKBKZ.jpg) (http://i.imgur.com/HPj1H6D.jpg).. they are already aligned. The final result, however, is not that good: http://i.imgur.com/Q0UDT6M.jpg Do you have any suggestions for how I can correct the final output, please? – Steph Mar 31 '14 at 08:42
  • Since the overlap fit isn't really good (neither eyes nor mouth are really correctly positioned), you might want to make all blending masks much smaller, so that smaller parts of the images are blended and more information of each image is taken without blending => the front image should overwrite big parts of the other images. – Micka Mar 31 '14 at 09:34
  • Can you tell me the name of, or a link to, the original paper? – Micka Mar 31 '14 at 09:35
  • I don't know how to make the blending masks smaller :s Here is the paper: https://www.utdallas.edu/~herve/abdi-ypam2005.pdf – Steph Mar 31 '14 at 09:37
  • Reducing the mask size does make it a little bit better, but not really good. Maybe the alpha values computed from distance don't reflect the quality well enough. The aim would be to compute the alpha values in a way that they are equal to 1 in most parts of each image but go to zero fast near the image border. Maybe I can find the time to implement that, but maybe not today. – Micka Mar 31 '14 at 09:54
  • I just need to get a working version. OK, when will you be able to do that? It's tiring to discover something new each time when using different images – Steph Mar 31 '14 at 10:04
  • Hard to tell whether all images will work; it strongly depends on the quality of the images etc. Can't tell whether or when I will have time, sorry. Maybe you want to implement pyramid blending from the linked lecture... – Micka Mar 31 '14 at 10:14
  • @Steph I've had a new idea for blending in your special task: with those colored points you have information about which parts of the image should contain information from the left image and which parts should contain information from the right image. In addition, all that's inside the 6 colored points of the front image should not be blended but just used unblended from the central image. I'll add code later in a new answer! – Micka Apr 03 '14 at 15:41
  • Okay, this is not quite clear, but if you add the code, maybe :) – Steph Apr 03 '14 at 16:07
  • This is amazing! @Micka I had a quick question regarding the line cv::Mat im1AlphaB = dist1Masked.mul(channels1[0]); I am getting an error saying that the input arrays' sizes don't match. To my understanding, the lines std::vector<cv::Mat> channels1; cv::split(im1Float,channels1); are supposed to account for this, but that doesn't appear to be the case. Using OpenCV 3.2. Any help is appreciated. – C.Radford Sep 03 '17 at 16:56
  • can you print and post channel numbers and types of dist1Masked, channels1[0] and im1Float? Maybe something changed between 2.4 and 3.3 – Micka Sep 03 '17 at 17:13
  • @Micka OpenCV version : 3.2.0-dev im1Float chan : 3 dist1Masked chan : 1 channels1[0] chan : 1. Also thank you kindly for the amazing prompt reply:) – C.Radford Sep 03 '17 at 17:20
  • can you make sure that dist1Masked and channels1[0] have same type? Maybe distance transform is double precision in 3.2 or something. And/or post the error message please – Micka Sep 03 '17 at 17:29
  • OpenCV version : 3.2.0-dev im1Float chan : 3 dist1Masked chan : 1 channels1[0] chan : 1 dist1Masked type : 5 channels1[0] type : 5 OpenCV Error: Sizes of input arguments do not match (The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array') in arithm_op. The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function arithm_op – C.Radford Sep 03 '17 at 17:31
  • ok, compare number of rows and cols, too. Maybe distance transform now is reduced at the border. – Micka Sep 03 '17 at 17:36
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/153563/discussion-between-c-radford-and-micka). – C.Radford Sep 03 '17 at 17:37
  1. First create a mask image from your input images; this can be done by thresholding the source images and performing a bitwise_and between them.

  2. Now copy the addWeighted result to a new Mat using the above mask.

[image] [image]

[image] [image]
[image]

In the code below I haven't used warpPerspective; instead I used an ROI on both images to align them correctly.

Mat left = imread("left.jpg");
Mat front = imread("front.jpg");
int x = 30, y = 10, w = 240, h = 200, offset_x = 20, offset_y = 6;
Mat leftROI = left(Rect(x, y, w, h));
Mat frontROI = front(Rect(x - offset_x, y + offset_y, w, h));

// create mask
Mat gray1, thr1;
cvtColor(leftROI, gray1, CV_BGR2GRAY);
threshold(gray1, thr1, 190, 255, CV_THRESH_BINARY_INV);
Mat gray2, thr2;
cvtColor(frontROI, gray2, CV_BGR2GRAY);
threshold(gray2, thr2, 190, 255, CV_THRESH_BINARY_INV);
Mat mask;
bitwise_and(thr1, thr2, mask);

// perform addWeighted and copy using the mask
Mat add;
double alpha = .5;
double beta = .5;
addWeighted(frontROI, alpha, leftROI, beta, 0.0, add, -1);
Mat dst(add.rows, add.cols, add.type(), Scalar::all(255));
add.copyTo(dst, mask);
imshow("dst", dst);
waitKey(0); // added so the window is actually displayed
Haris
  • Hi @Haris. I have a query: the values that you put in x, y, h, offset_x and offset_y are only for the face above, right? For other faces, the dimensions won't be the same. How can I generate the mask for different people without changing those values each time? – Steph Mar 22 '14 at 01:50
  • @Steph You can create the mask before the ROI, but for the final blending you need to align your mask properly; here I did it manually by taking the green dots as common points. You can do it by finding the center point of the green dot (contour center of mass) on both images and taking that value as the offset, but both of your input images should contain some markings as above. – Haris Mar 23 '14 at 05:14
  • Hi Haris! Actually I have already aligned the images based on the center of mass of the markers. I only need to find their mask. How can I do this? – Steph Mar 23 '14 at 05:33
  • 'Find their mask' means find the mask for the face with proper alignment? – Haris Mar 23 '14 at 06:06
  • In the accepted answer above, the function computeAlphaBlending() takes as arguments the aligned images and their respective masks. I need to find their masks after the alignment has been done. Only their masks – Steph Mar 23 '14 at 06:41
  • Hi Steph, you can easily find this out by taking a single common marker image from both images, then checking x and y: if the x,y (center point) of both markers are the same, the faces are aligned; if not, find the difference between x and y and take these values as offsets, then shift/crop the image using the ROI. – Haris Mar 24 '14 at 02:24
  • No @Haris. I have ALREADY aligned the images. See my problem here: http://stackoverflow.com/questions/22427550/face-mask-in-opencv – Steph Mar 24 '14 at 03:17

OK, here's a new try, which might only work for your specific task of blending exactly these 3 images of faces: front, left, right.

I use these inputs:

front (i1):

[image]

left (i2):

[image]

right (i3):

[image]

front mask (m1, optional):

[image]

The problem with these images is that the front image only covers a small part, while left and right overlap the whole front image, which leads to poor blending in my other solution. In addition, the alignment of the images isn't great (mostly due to perspective effects), so blending artifacts can occur.

Now the idea of this new method is that you definitely want to keep the parts of the front image that lie inside the area spanned by your colored "marker points"; these should not be blended. The further you go away from that marker area, the more information from the left and right images should be used, so we create a mask with alpha values that decrease linearly from 1 (inside the marker area) to 0 (at some defined distance from the marker region).
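In code, that falloff is the distance-transform part of the full listing below; condensed here for reference (`frontMask` and `maxDist` are the ones defined there):

cv::Mat dist;
cv::distanceTransform(255-frontMask, dist, CV_DIST_L2, 3); // distance of each pixel to the marker area
dist /= maxDist;                           // scale: maxDist away maps to 1
dist.setTo(cv::Scalar(1.0f), dist > 1.0f); // clamp everything further away
cv::Mat alpha = 1.0f - dist;               // 1 inside the marker area, linearly falling to 0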

So the region spanned by the markers is this one:

[image]

Since we know that the left image is basically used in the region left of the left marker triangle, we can create masks for the left and right images, which are used to find the region that should additionally be covered by the front image:

left:

[image]

right:

[image]

Front marker region plus everything that is in neither the left nor the right mask:

[image]

This can be masked with the optional front mask input, which is better because this front image example sadly covers only a part of the image:

[image]

Now this is the blending mask, with a linearly decreasing alpha value until the distance to the mask is 10 or more pixels:

[image]

Now we first create the image covering only the left and right images, copying most parts unblended, but blending the parts not exclusively covered by one of the left/right masks with 0.5*left + 0.5*right.

blendLR:

[image]

Finally we blend the front image into that blendLR by computing:

blended = alpha*front + (1-alpha)*blendLR

[image]

Some improvements might include calculating the maxDist value from some higher-level information (like the size of the overlap, or the distance from the marker triangles to the border of the face).

Another improvement would be to not compute 0.5*left + 0.5*right but to do some alpha blending here too, taking more information from the left image the further left we are in the gap. This would reduce the seams in the middle of the image (at the top and bottom of the front image part); see the sketch below.
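
A sketch of that left-to-right ramp; this is a hypothetical helper, not part of the answer's code, assuming CV_32FC3 inputs of equal size and assumed gap column bounds x0/x1:

// hypothetical helper (needs <algorithm> for std::min/std::max):
// alpha falls linearly from 1 (at column x0) to 0 (at column x1)
cv::Mat rampBlendLR(const cv::Mat& leftF, const cv::Mat& rightF, int x0, int x1)
{
    CV_Assert(leftF.type() == CV_32FC3 && rightF.size() == leftF.size());
    cv::Mat out(leftF.size(), CV_32FC3);
    for(int y = 0; y < leftF.rows; y++)
    {
        for(int x = 0; x < leftF.cols; x++)
        {
            float a = 1.0f - (float)(x - x0) / (float)(x1 - x0);
            a = std::max(0.0f, std::min(1.0f, a)); // clamp to [0,1]
            out.at<cv::Vec3f>(y,x) = a*leftF.at<cv::Vec3f>(y,x) + (1.0f-a)*rightF.at<cv::Vec3f>(y,x);
        }
    }
    return out;
}

Something like this could replace the cv::addWeighted(left,0.5,right,0.5,0,result) call in the code below for the gap region.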

// idea: keep all the pixels from front image that are inside your 6 points area always unblended:
cv::Mat blendFrontAlpha(cv::Mat front, cv::Mat left, cv::Mat right, std::vector<cv::Point> sixPoints, cv::Mat frontForeground = cv::Mat())
{
// define some maximum distance. No information of the front image is used if it's further away than that maxDist.
// if you have some real masks, you can easily set the maxDist according to the dimension of that mask - dimension of the 6-point-mask
float maxDist = 10;

// to use the cv function to draw contours we must order it like this:
std::vector<std::vector<cv::Point> > contours;
contours.push_back(sixPoints);

// create the mask
cv::Mat frontMask = cv::Mat::zeros(front.rows, front.cols, CV_8UC1);

// draw those 6 points connected as a filled contour
cv::drawContours(frontMask,contours,0,cv::Scalar(255),-1);

// add "lines": everything left from the points 3-4-5 might be used from left image, everything from the points 0-1-2 might be used from the right image:
cv::Mat leftMask = cv::Mat::zeros(front.rows, front.cols, CV_8UC1);
{
    cv::Point2f center = cv::Point2f(sixPoints[3].x, sixPoints[3].y);

    // "Steigung" is German for slope: the slope of the line through points 3 and 5
    float steigung = ((float)sixPoints[5].y - (float)sixPoints[3].y)/((float)sixPoints[5].x - (float)sixPoints[3].x);
    if(sixPoints[5].x - sixPoints[3].x == 0) steigung = 2*front.rows;

    float n = center.y - steigung*center.x;

    cv::Point2f top = cv::Point2f( (0-n)/steigung , 0);
    cv::Point2f bottom = cv::Point2f( (front.rows-1-n)/steigung , front.rows-1);

    // now create the contour of the left image:
    std::vector<cv::Point> leftMaskContour;
    leftMaskContour.push_back(top);
    leftMaskContour.push_back(bottom);
    leftMaskContour.push_back(cv::Point(0,front.rows-1));
    leftMaskContour.push_back(cv::Point(0,0));

    std::vector<std::vector<cv::Point> > leftMaskContours;
    leftMaskContours.push_back(leftMaskContour);
    cv::drawContours(leftMask,leftMaskContours,0,cv::Scalar(255),-1);

    cv::imshow("leftMask", leftMask);

    cv::imwrite("x_leftMask.png", leftMask);
}

// add "lines": everything left from the points 3-4-5 might be used from left image, everything from the points 0-1-2 might be used from the right image:
cv::Mat rightMask = cv::Mat::zeros(front.rows, front.cols, CV_8UC1);
{
    // add "lines": everything left from the points 3-4-5 might be used from left image, everything from the points 0-1-2 might be used from the right image:
    cv::Point2f center = cv::Point2f(sixPoints[2].x, sixPoints[2].y);

    // slope of the line through points 0 and 2
    float steigung = ((float)sixPoints[0].y - (float)sixPoints[2].y)/((float)sixPoints[0].x - (float)sixPoints[2].x);
    if(sixPoints[0].x - sixPoints[2].x == 0) steigung = 2*front.rows;

    float n = center.y - steigung*center.x;

    cv::Point2f top = cv::Point2f( (0-n)/steigung , 0);
    cv::Point2f bottom = cv::Point2f( (front.rows-1-n)/steigung , front.rows-1);

    std::cout << top << " - " << bottom << std::endl;

    // now create the contour of the right image:
    std::vector<cv::Point> rightMaskContour;
    rightMaskContour.push_back(cv::Point(front.cols-1,0));
    rightMaskContour.push_back(cv::Point(front.cols-1,front.rows-1));
    rightMaskContour.push_back(bottom);
    rightMaskContour.push_back(top);

    std::vector<std::vector<cv::Point> > rightMaskContours;
    rightMaskContours.push_back(rightMaskContour);
    cv::drawContours(rightMask,rightMaskContours,0,cv::Scalar(255),-1);

    cv::imshow("rightMask", rightMask);
    cv::imwrite("x_rightMask.png", rightMask);
}

// add everything that's not in the side masks to the front mask:
cv::Mat additionalFrontMask = (255-leftMask) & (255-rightMask);
// if we know more about the front face, use that information:
cv::imwrite("x_frontMaskIncreased1.png", frontMask + additionalFrontMask);
if(frontForeground.cols)
{
    // since the blending mask is blended for maxDist distance, we have to erode this mask here.
    cv::Mat tmp;
    cv::erode(frontForeground,tmp,cv::Mat(),cv::Point(),maxDist);
    // idea is to only use the additional front mask in those areas where the front image contains face and not background parts.
    additionalFrontMask = additionalFrontMask & tmp;
}
frontMask = frontMask + additionalFrontMask;
cv::imwrite("x_frontMaskIncreased2.png", frontMask);

//todo: add lines
cv::imshow("frontMask", frontMask);

// for visualization only:
cv::Mat frontMasked;
front.copyTo(frontMasked, frontMask);
cv::imshow("frontMasked", frontMasked);

cv::imwrite("x_frontMasked.png", frontMasked);

// compute inverse of mask to take it as input for distance transform:
cv::Mat inverseFrontMask = 255-frontMask;

// compute the distance to the mask, the further away from the mask, the less information from the front image should be used:
cv::Mat dist;
cv::distanceTransform(inverseFrontMask,dist,CV_DIST_L2, 3);

// scale wanted values between 0 and 1:
dist /= maxDist;
// remove all values > 1; those values are further away than maxDist pixel from the 6-point-mask
dist.setTo(cv::Scalar(1.0f), dist>1.0f);
// now invert the values so that they are == 1 inside the 6-point-area and go to 0 outside:
dist = 1.0f-dist;


cv::Mat alphaValues = dist;
//cv::Mat alphaNonZero = alphaValues > 0;
// now alphaValues contains your general blendingMask.
// but to use it on colored images, we need to duplicate the channels:
std::vector<cv::Mat> singleChannels;
singleChannels.push_back(alphaValues);
singleChannels.push_back(alphaValues);
singleChannels.push_back(alphaValues);
// merge all the channels:
cv::merge(singleChannels, alphaValues);

cv::imshow("alpha mask",alphaValues);
cv::imwrite("x_alpha_mask.png", alphaValues*255);

// convert all input mats to floating point mats:
front.convertTo(front,CV_32FC3);
left.convertTo(left,CV_32FC3);
right.convertTo(right,CV_32FC3);


cv::Mat result;
// first: blend left and right both with 0.5 to the result, this gives the correct results for the intersection of left and right equally weighted.
// TODO: these values could be blended from left to right, giving some finer results
cv::addWeighted(left,0.5,right,0.5,0, result);

// now copy all the elements that are included in only one of the masks (not blended, just 100% information)
left.copyTo(result,leftMask & (255-rightMask));
right.copyTo(result,rightMask & (255-leftMask));

cv::imshow("left+right", result/255.0f);
cv::imwrite("x_left_right.png", result);

// now blend the front image with its alpha blending mask:
cv::Mat result2 = front.mul(alphaValues) + result.mul(cv::Scalar(1.0f,1.0f,1.0f)-alphaValues);

cv::imwrite("x_front_blend.png", front.mul(alphaValues));

cv::imshow("inv", cv::Scalar(1.0f,1.0f,1.0f)-alphaValues);
cv::imshow("f a", front.mul(alphaValues)/255.0f);
cv::imshow("f r", (result.mul(cv::Scalar(1.0f,1.0f,1.0f)-alphaValues))/255.0f);



result2.convertTo(result2, CV_8UC3);
return result2;


}

int main()
{
// front image
cv::Mat i1 = cv::imread("blending/new/front.jpg");
// left image
cv::Mat i2 = cv::imread("blending/new/left.jpg");
// right image
cv::Mat i3 = cv::imread("blending/new/right.jpg");

// optional: mask of front image
cv::Mat m1 = cv::imread("blending/new/mask_front.png",CV_LOAD_IMAGE_GRAYSCALE);

cv::imwrite("x_M1.png", m1);

// these are the marker points you detect in the front image.
// the order is important. the first three pushed points are the right points (left part of the face) in order from top to bottom
// the second three points are the ones from the left image half, in order from bottom to top
// check coordinates for those input images to understand the ordering!
std::vector<cv::Point> frontArea;
frontArea.push_back(cv::Point(169,92));
frontArea.push_back(cv::Point(198,112));
frontArea.push_back(cv::Point(169,162));
frontArea.push_back(cv::Point(147,162));
frontArea.push_back(cv::Point(122,112));
frontArea.push_back(cv::Point(147,91));

// first parameter is the front image, then left (right face half), then right (left half of face), then the image polygon and optional the front image mask (which contains all facial parts of the front image)
cv::Mat result = blendFrontAlpha(i1,i2,i3, frontArea, m1);


cv::imshow("RESULT", result);
cv::imwrite("x_Result.png", result);

cv::waitKey(-1);

return 0;

}
Micka
  • @Steph I think with those input images and for that alignment you can't expect much better automated results with easy alpha blending techniques. – Micka Apr 04 '14 at 13:26
  • Thanks a lot Micka! I will try this and definitely try to improve it :) – Steph Apr 05 '14 at 11:55
  • @Micka I am trying to do something similar; can you please tell me how multiply blending can be done, like in Photoshop? – AHF Apr 24 '14 at 06:59
  • sorry, I don't use photoshop, so no idea what you want to achieve. Can you give examples? – Micka Apr 24 '14 at 09:38

In order to avoid making the faces transparent outside their intersection, you cannot use a single alpha value for the whole image.

For instance, you need to use alpha=0.5 in the intersection of img[0] and img[1], alpha=1 in the region where img[1]=0 and alpha=0 in the region where img[0]=0.
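
A minimal sketch of such a spatially varying alpha map, assuming 8-bit masks mask0/mask1 that are 255 where img[0]/img[1] have content (the helper name is hypothetical):

// hypothetical helper: per-pixel alpha for out = alpha*img0 + (1-alpha)*img1
cv::Mat makeAlphaMap(const cv::Mat& mask0, const cv::Mat& mask1)
{
    cv::Mat alpha(mask0.size(), CV_32FC1, cv::Scalar(0.0f));
    alpha.setTo(cv::Scalar(1.0f), mask0 & ~mask1); // only img[0] present: keep it fully
    alpha.setTo(cv::Scalar(0.0f), mask1 & ~mask0); // only img[1] present: keep that fully
    alpha.setTo(cv::Scalar(0.5f), mask0 & mask1);  // intersection: average both
    return alpha;
}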

This approach is the easy one, but it won't completely remove the seams. If you want that, you have to adapt alpha more intelligently based on image content. You can have a look at the numerous research articles on that topic, but this is not a trivial task:

  • "Seamless image stitching in the gradient domain", by Levin, Zomet Peleg & Weiss, ECCV 2004 (link)

  • "Seamless stitching using multi-perspective plane sweep", by Kang, Szeliski & Uyttendaele, 2004 (link)

BConic