
I have a layer where I want the user to draw a 'mask' for cutting out images. It is semi-opaque so that they can see the image beneath the area they are selecting.

How can I process this so that the drawing data has an alpha of 1.0, while retaining the alpha channel (for masking)?

TL;DR - I'd like the black area to be a solid, single colour.

Here is the desired before and after (the white background should be transparent in both): [image: Desired Before & After]

Something like this:

for (pixel in image) {
  if (pixel.alpha != 0.0) {
    fill solid black
  }
}
Halpo

2 Answers


The following should do what you're after. The majority of the code is from How to set the opacity/alpha of a UIImage?; I've only added a test for the alpha value before converting the colour of the pixel to black.

// Create a pixel buffer in an easy-to-use format
CGImageRef imageRef = [[UIImage imageNamed:@"testImage"] CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

UInt8 * m_PixelBuf = malloc(sizeof(UInt8) * height * width * 4);

NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(m_PixelBuf, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);

// Convert every pixel whose alpha != 0 to solid, fully opaque black
// (buffer layout is RGBA, 4 bytes per pixel)
NSUInteger length = height * width * 4;
for (NSUInteger i = 0; i < length; i += 4) {
  if (m_PixelBuf[i+3] != 0) {
    m_PixelBuf[i]   = 0;   // R
    m_PixelBuf[i+1] = 0;   // G
    m_PixelBuf[i+2] = 0;   // B
    m_PixelBuf[i+3] = 255; // A = 1.0
  }
}

//create a new image
CGContextRef ctx = CGBitmapContextCreate(m_PixelBuf, width, height,
                                         bitsPerComponent, bytesPerRow, colorSpace,
                                         kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

CGImageRef newImgRef = CGBitmapContextCreateImage(ctx);
CGColorSpaceRelease(colorSpace);
CGContextRelease(ctx);
free(m_PixelBuf);

UIImage *finalImage = [UIImage imageWithCGImage:newImgRef];
CGImageRelease(newImgRef);

finalImage now contains an image in which every pixel that didn't have an alpha of 0.0 is solid black with an alpha of 1.0.
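If you then want to apply the result as a mask, one option is to assign it to a layer's contents and set that layer as the mask of the layer you're cutting out, since CALayer masking uses the alpha channel. A minimal sketch, assuming an imageView of your own that holds the image being cut out:

// Minimal sketch: use finalImage as a layer mask (imageView is an assumed
// view of yours). CALayer masking honours the mask's alpha channel.
CALayer *maskLayer = [CALayer layer];
maskLayer.frame = imageView.bounds;
maskLayer.contents = (__bridge id)finalImage.CGImage;
imageView.layer.mask = maskLayer;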

rickerbh
  • nice, just needed a small adjustment to make the pixels black: `for (int i=0; i` … – Halpo Nov 03 '14 at 07:36
  • With all due deference, I do not think that this technique, of tweaking the pixel buffer of the original bitmap, is the correct approach. This assumes that the alpha channel reflects simply the opacity of the line path. But your rendered path may feature anti-aliased edges; this technique will unnecessarily introduce jaggies in the mask. I would advise against editing the pixel buffer like this. Instead, you want to re-render the original paths of your model using alpha of 1.0. That way your mask will enjoy the anti-aliased nature of the edges, while making the main stroke path fully opaque. – Rob Nov 03 '14 at 14:57

The underlying model for this app should not be images. This is not a question of "how do I create one rendition of the image from the other."

Instead, the underlying object model should be an array of paths. Then creating the image with translucent paths versus opaque paths is just a matter of how you render that array. Once you tackle it that way, the problem is not a complex image-manipulation question but a simple rendering one.

By the way, I really like this array-of-paths model, because then it becomes quite trivial to do things like "gee, let me provide an undo function, letting the user remove one stroke at a time." It opens you up to all sorts of nice functional enhancements.
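A minimal sketch of that model, assuming nothing beyond UIKit (PathCanvas and its method names are illustrative, not from the original post):

#import <UIKit/UIKit.h>

// Illustrative model object: the drawing is an array of strokes (paths),
// not a bitmap.
@interface PathCanvas : NSObject
@property (nonatomic, strong) NSMutableArray<UIBezierPath *> *strokes;
- (void)addStroke:(UIBezierPath *)path;
- (void)undoLastStroke;
@end

@implementation PathCanvas
- (instancetype)init {
    if ((self = [super init])) {
        _strokes = [NSMutableArray array];
    }
    return self;
}

// Each finished user stroke is appended as its own path...
- (void)addStroke:(UIBezierPath *)path {
    [self.strokes addObject:path];
}

// ...so undo is just removing the most recent stroke.
- (void)undoLastStroke {
    [self.strokes removeLastObject];
}
@end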

In terms of the specifics of how to render these paths, it can be done in a variety of ways. You could use a custom drawRect: implementation in a UIView subclass that renders the paths with the appropriate alpha. Or you could do it with CAShapeLayer objects. Or you could do some hybrid (creating new image snapshots as you finish adding each path, saving you from having to re-render all of the paths each time). There are tons of ways of tackling this.
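For instance, the drawRect: route might look something like this (a sketch building on the illustrative PathCanvas above; strokeAlpha is an assumed property):

// Illustrative view that renders the strokes at a configurable alpha:
// 0.5 while the user draws, 1.0 when rendering the final mask.
@interface CanvasView : UIView
@property (nonatomic, strong) PathCanvas *canvas;   // assumed model from above
@property (nonatomic, assign) CGFloat strokeAlpha;
@end

@implementation CanvasView
- (void)drawRect:(CGRect)rect {
    [[[UIColor blackColor] colorWithAlphaComponent:self.strokeAlpha] setStroke];
    for (UIBezierPath *path in self.canvas.strokes) {
        [path stroke];
    }
}
@end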

But the key insight is to employ an underlying model of an array of paths; the rendering of your two types of images then becomes a fairly trivial exercise:

[Images: the paths rendered translucently, and the resulting opaque mask]

The first image is a rendering of a bunch of paths as CAShapeLayer objects with an alpha of 0.5. The second is the same rendering, but with an alpha of 1.0. Again, it doesn't matter whether you use shape layers or low-level Core Graphics calls; the underlying idea is the same. Either render your paths with translucency or not.
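A sketch of the shape-layer variant as a single helper (the RenderPaths function, the fixed line width, and rendering into an image are illustrative assumptions, not code from the answer):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Illustrative helper: render an array of paths into a UIImage at a given
// alpha - 0.5 for the translucent overlay, 1.0 for the opaque mask.
UIImage *RenderPaths(NSArray<UIBezierPath *> *paths, CGSize size, CGFloat alpha) {
    CALayer *container = [CALayer layer];
    container.frame = CGRectMake(0, 0, size.width, size.height);

    for (UIBezierPath *path in paths) {
        CAShapeLayer *shape = [CAShapeLayer layer];
        shape.path = path.CGPath;
        shape.strokeColor = [UIColor blackColor].CGColor;
        shape.fillColor = nil;        // stroke only; the default fill is opaque black
        shape.lineWidth = 10.0;       // assumed brush width
        shape.lineCap = kCALineCapRound;
        shape.opacity = alpha;        // translucency is just a rendering parameter
        [container addSublayer:shape];
    }

    UIGraphicsBeginImageContextWithOptions(size, NO, 0);
    [container renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}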

Rob
  • my app uses paths (with undo ability etc), this is just for a final rendering solution – Halpo Nov 03 '14 at 07:38
  • Then I don't understand the question. Render your paths twice to get the two images, once with alpha less than one, and then again with alpha equal to one. That yields the two images you need. – Rob Nov 03 '14 at 13:24