
I'm trying to reproduce Photoshop's multiply blend mode in OpenCV. Equivalents to this would be what you find in GIMP, or when you use the CIMultiplyBlendMode in Apple's CoreImage framework.

Everything I read online suggests that multiply blending is accomplished simply by multiplying the channels of the two input images (i.e., Blend = AxB). And this works, except in cases where alpha is < 1.0.

You can test this very simply in GIMP/Photoshop/CoreImage by creating two layers/images, filling each with a different solid color, and then modifying the opacity of the first layer. (BTW, when you modify alpha, the operation is no longer commutative in GIMP for some reason.)

A simple example: if A = (0,0,0,0) and B = (0.4,0,0,1.0), and C = AxB, then I would expect C to be (0,0,0,0). This is simple multiplication. But this is not how this blend is implemented in practice. In practice, C = (0.4,0,0,1.0), or C = B.

The bottom line is this: I need to figure out the formula for the multiply blend mode (which is clearly more than AxB) and then implement it in OpenCV (which should be trivial once I have the formula).

Would appreciate any insights.

Also, for reference, here are some links which show multiply blend as being simply AxB:

How does photoshop blend two images together

Wikipedia - Blend Modes

Photoshop Blend Modes


2 Answers

I managed to sort this out. Feel free to comment with any suggested improvements.

First, I found a clue as to how to implement the multiply function in this post:

multiply blending

And here's a quick OpenCV implementation in C++.

Mat MultiplyBlend(const Mat& cvSource, const Mat& cvBackground) {
    // assumption: cvSource and cvBackground are of type CV_8UC4

    // formula: (cvSource.rgb * cvBackground.rgb * cvSource.a) + (cvBackground.rgb * (1-cvSource.a))

    // replicate the source alpha channel into a 3-channel Mat
    Mat cvAlpha(cvSource.size(), CV_8UC3, Scalar::all(0));
    Mat input[] = { cvSource };
    int from_to[] = { 3,0, 3,1, 3,2 };
    mixChannels(input, 1, &cvAlpha, 1, from_to, 3);

    // drop the alpha channels so the arithmetic runs on RGB only
    Mat cvBackgroundCopy;
    Mat cvSourceCopy;
    cvtColor(cvSource, cvSourceCopy, COLOR_RGBA2RGB);
    cvtColor(cvBackground, cvBackgroundCopy, COLOR_RGBA2RGB);

    // A = cvSource.rgb * cvBackground.rgb * cvSource.a
    Mat cvBlendResultLeft;
    multiply(cvSourceCopy, cvBackgroundCopy, cvBlendResultLeft, 1.0 / 255.0);
    multiply(cvBlendResultLeft, cvAlpha, cvBlendResultLeft, 1.0 / 255.0);

    // invert alpha
    bitwise_not(cvAlpha, cvAlpha);

    // B = cvBackground.rgb * (1-cvSource.a)
    Mat cvBlendResultRight;
    multiply(cvBackgroundCopy, cvAlpha, cvBlendResultRight, 1.0 / 255.0);

    // A + B
    // note: no manual delete is needed; Mats are reference counted and
    // release their data automatically when they go out of scope
    Mat cvBlendResult;
    add(cvBlendResultLeft, cvBlendResultRight, cvBlendResult);

    cvtColor(cvBlendResult, cvBlendResult, COLOR_RGB2RGBA);

    return cvBlendResult;
}
  • I have 2 comments: 1) You should not use `Mat*`. 2) nice solution :D – Miki Sep 19 '15 at 02:45
  • Thanks @Miki. Why should I not use a pointer? Is it simply because it is now the calling method's responsibility to delete it? – joelg Sep 19 '15 at 20:44
  • Because `Mat`s are already reference counted internally and that allows for efficient copy and data sharing. Using pointers you're likely to break internal consistency. Also, in general, allocating objects on the stack is faster and less error prone (no need to free the memory). – Miki Sep 19 '15 at 20:48
  • So if you return a Mat, the internal data is not being copied? That's what I was trying to avoid (sorry... new to OpenCV). Would it not be similar to passing a Mat (not Mat&) as a parameter to a method? – joelg Sep 19 '15 at 20:50
  • Have a look [here](http://stackoverflow.com/questions/23468537/differences-of-using-const-cvmat-cvmat-cvmat-or-const-cvmat). Copying a Mat just copies the header, not the data. Using Mat& you don't copy even the header. To copy also the data (deep copy) you need to call clone() – Miki Sep 19 '15 at 20:55
  • Thanks. I'll give it a read. What seems strange to me is that the Mat you've allocated on the stack (which you return at the end: dst3b) would be lost after the method execution. But hey, if OpenCV handles that, then great! – joelg Sep 19 '15 at 21:57
  • Try to follow: dst3b is created (refcount = 1). The return statement copies it (only the header) to blend (refcount = 2). The function terminates and dst3b's destructor is called (refcount = 1). So the matrix is still "alive" in the main. Once the main terminates, blend's destructor is called (refcount = 0), and so the data is released. This is in theory; in practice RVO (return value optimization) will avoid the intermediate copy and the first destructor call. – Miki Sep 19 '15 at 22:01
  • Yup, that makes sense. Your last sentence should say "This is in theory..." (just for posterity's sake). – joelg Sep 21 '15 at 14:06
  • Well, not a native english speaker here... Glad that the sense was clear enough :D – Miki Sep 21 '15 at 14:22

Here is an OpenCV solution based on the source code of GIMP, specifically the function gimp_operation_multiply_mode_process_pixels.

NOTE

  • Instead of looping over every pixel this could be vectorized, but I followed the steps of the GIMP implementation.
  • Input images must be of type CV_8UC3 or CV_8UC4.
  • It also supports an opacity value, which must be in [0, 255].
  • The original GIMP implementation also supports a mask; it could easily be added to this code, if needed.
  • This implementation is in fact not symmetrical, and reproduces the strange behavior you observed.

Code:

#include <opencv2/opencv.hpp>
using namespace cv;

Mat blend_multiply(const Mat& level1, const Mat& level2, uchar opacity)
{
    CV_Assert(level1.size() == level2.size());
    CV_Assert(level1.type() == level2.type());
    CV_Assert(level1.channels() == level2.channels());

    // Get 4 channel float images
    Mat4f src1, src2;

    if (level1.channels() == 3)
    {
        Mat4b tmp1, tmp2;
        cvtColor(level1, tmp1, COLOR_BGR2BGRA);
        cvtColor(level2, tmp2, COLOR_BGR2BGRA);
        tmp1.convertTo(src1, CV_32F, 1. / 255.);
        tmp2.convertTo(src2, CV_32F, 1. / 255.);
    }
    else
    {
        level1.convertTo(src1, CV_32F, 1. / 255.);
        level2.convertTo(src2, CV_32F, 1. / 255.);
    }

    Mat4f dst(src1.rows, src1.cols, Vec4f(0., 0., 0., 0.));

    // Loop on every pixel

    float fopacity = opacity / 255.f;
    float comp_alpha, new_alpha;

    for (int r = 0; r < src1.rows; ++r)
    {
        for (int c = 0; c < src1.cols; ++c)
        {
            const Vec4f& v1 = src1(r, c);
            const Vec4f& v2 = src2(r, c);
            Vec4f& out = dst(r, c);

            comp_alpha = min(v1[3], v2[3]) * fopacity;
            new_alpha = v1[3] + (1.f - v1[3]) * comp_alpha;

            if ((comp_alpha > 0.) && (new_alpha > 0.))
            {
                float ratio = comp_alpha / new_alpha;

                out[0] = max(0.f, min(v1[0] * v2[0], 1.f)) * ratio + (v1[0] * (1.f - ratio));
                out[1] = max(0.f, min(v1[1] * v2[1], 1.f)) * ratio + (v1[1] * (1.f - ratio));
                out[2] = max(0.f, min(v1[2] * v2[2], 1.f)) * ratio + (v1[2] * (1.f - ratio));
            }
            else
            {
                out[0] = v1[0];
                out[1] = v1[1];
                out[2] = v1[2];
            }

            out[3] = v1[3];

        }
    }

    Mat3b dst3b;
    Mat4b dst4b;
    dst.convertTo(dst4b, CV_8U, 255.);
    cvtColor(dst4b, dst3b, COLOR_BGRA2BGR);

    return dst3b;
}

int main()
{
    Mat3b layer1 = imread("path_to_image_1");
    Mat3b layer2 = imread("path_to_image_2");

    Mat blend = blend_multiply(layer1, layer2, 255);

    return 0;
}
  • Thanks for this! I was trying to avoid looping through every pixel, but I suppose I'd need to profile both solutions to see if this is really any slower. – joelg Sep 19 '15 at 20:48
  • I don't think that efficiency is an issue here. Explicitly or inside OpenCV functions you need to scan the matrix anyway. If efficiency is an issue, I (we) can come up with another solution. I keep this layout to be consistent with GIMP implementation. – Miki Sep 19 '15 at 20:52
  • @joelg however, let me know if this function works as expected :D – Miki Sep 19 '15 at 20:58
  • sure, I'll do that when I get a chance. But, for example, I had an overlay blend implementation that iterated over each pixel and computed the result. When I refactored it to only use OpenCV functions (add, multiply, bitwise_not, etc.), I cut the processing time by two-thirds. I'm guessing there must be some built-in optimization when using the OpenCV operations, but I haven't looked over the source code (yet). – joelg Sep 22 '15 at 17:05
  • Yeah, this implementation is not designed for efficiency. But probably using pointers instead of index-based access would work quite well too. _First make it work, then make it work faster!_ – Miki Sep 22 '15 at 17:10