
I've got two methods I'm working with, and they aren't playing nicely. The first is a Perlin noise generator that exports a black-and-white UIImage of random clouds, and it works perfectly. The second takes a UIImage and filters out all pixels above or below a given brightness, returning an image with transparency where the unwanted pixels were; it too works perfectly with the black-and-white test images I've been using.

But when I try to feed an image from the first method into the second, it doesn't work. Every pixel gets removed, no matter what values I pass in, and I get back a blank UIImage. (To be clear: a non-nil UIImage containing nothing but transparent pixels, as though every pixel were being counted as outside the desired brightness range, regardless of its actual brightness.)

Below are the two methods. I adapted each from tutorials and SO answers, and while I'm not 100% comfortable with Core Graphics, they seem reasonably simple to me: the first iterates through each pixel and colors it with RGB values from a Perlin formula, and the second creates a mask based on the input values. (Note: both are category methods on UIImage, so the "self" references in the latter method refer to the source image.)

+ (UIImage *)perlinMapOfSize:(CGSize)size {
    CZGPerlinGenerator *generator = [[CZGPerlinGenerator alloc] init];
    generator.octaves = 10;
    generator.persistence = 0.5;
    generator.zoom = 150;

    CGContextRef ctx = [self contextSetup:size];

    CGContextSetRGBFillColor(ctx, 0.000, 0.000, 0.000, 1.000);
    CGContextFillRect(ctx, CGRectMake(0.0, 0.0, size.width, size.height));
    for (CGFloat x = 0.0; x<size.width; x+=1.0) {
        for (CGFloat y=0.0; y<size.height; y+=1.0) {
            double value = [generator perlinNoiseX:x y:y z:0 t:0];
            CGContextSetRGBFillColor(ctx, value, value, value, 1.0);
            CGContextFillRect(ctx, CGRectMake(x, y, 1.0, 1.0));
        }
    }

    return [self finishImageContext];
}

- (UIImage *)imageWithLumaMaskFromDark:(CGFloat)lumaFloor toLight:(CGFloat)lumaCeil {
    // inputs range from 0 - 255
    CGImageRef rawImageRef = self.CGImage;

    const CGFloat colorMasking[6] = {lumaFloor, lumaCeil, lumaFloor, lumaCeil, lumaFloor, lumaCeil};

    UIGraphicsBeginImageContext(self.size);
    CGImageRef maskedImageRef = CGImageCreateWithMaskingColors(rawImageRef, colorMasking);
    // flip the context vertically (UIKit and Core Graphics use opposite y-axes)
    CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0.0, self.size.height);
    CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0);

    CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, self.size.width, self.size.height), maskedImageRef);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    CGImageRelease(maskedImageRef);
    UIGraphicsEndImageContext();
    return result;
}

Does anyone know why an image from the former method would be incompatible with the latter? The former method is successfully returning cloud images, and the latter method is working with every image I feed into it from my computer or the internet, just not the images from the former method.

I'm assuming that the CGImageCreateWithMaskingColors() call in the second method is looking for some information that the first method isn't putting into the image, or something along those lines; I just don't know the system well enough to figure out what's wrong.

Can anyone shed some light?

EDIT: As requested, here are the two other methods referenced above. It's an odd setup, I know, to use class methods like that in a category, but that's how I found the code in a tutorial, and it works, so I never bothered to change it.

+ (CGContextRef) contextSetup: (CGSize) size {
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    UIGraphicsPushContext(context);
    //NSLog(@"Begin drawing");
    return context;
}

+ (UIImage *) finishImageContext {
    //NSLog(@"End drawing");
    UIGraphicsPopContext();
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}

EDIT 2: Some research turned up the fact that the CGImageCreateWithMaskingColors() function doesn't work with images that include an alpha component, so I've rearranged the first method as shown below. My gut tells me this was the problem, but I'm kind of casting about in the dark. This is my attempt at creating an image with kCGImageAlphaNone, but now UIGraphicsGetImageFromCurrentImageContext() at the end is returning nil.

+ (UIImage *)perlinMapOfSize:(CGSize)size {
    CZGPerlinGenerator *generator = [[CZGPerlinGenerator alloc] init];
    generator.octaves = 10;
    generator.persistence = 0.5;
    generator.zoom = 150;

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, size.width, size.height, 8, size.width * 4, colorSpace, kCGImageAlphaNoneSkipLast);
    UIGraphicsPushContext(ctx);

    CGContextSetRGBFillColor(ctx, 0.000, 0.000, 0.000, 1.0);
    CGContextFillRect(ctx, CGRectMake(0.0, 0.0, size.width, size.height));
    for (int x = 0; x<size.width; x++) {
        for (int y=0; y<size.height; y++) {
            double value = [generator perlinNoiseX:x y:y z:0 t:0];
            CGContextSetRGBFillColor(ctx, value, value, value, 1.0);
            CGContextFillRect(ctx, CGRectMake(x, y, 1.0, 1.0));
        }
    }

    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    NSLog(@"Output: %@", outputImage);
    UIGraphicsEndImageContext();
    CGContextRelease(ctx);
    return outputImage;
}
Nerrolken
  • May we see `contextSetup` and `finishImageContext` please? I don't understand how they can possibly work; this is a _class_ method, so no state can be maintained — so how is the information being passed around? To put it another way, what is `self` supposed to be? Where does the UIImage ultimately come from? – matt Jun 08 '16 at 01:58
  • @matt I added a comment with those two methods. It's an odd setup, I know, to use class methods like that in a category, but it's how I found the code in a tutorial and it works so I never bothered to change it. Do you think that's related to my problem? – Nerrolken Jun 08 '16 at 06:16
  • Any reasons for the downvotes? Suggestions of how I can improve my question would be appreciated, rather than just downvoting silently... – Nerrolken Jun 08 '16 at 06:24
  • Have you done a search on SO? There are some strange behaviors of CGImageCreateWithMaskingColors reported. It may be that your problem is a related artifact. See, for example, http://stackoverflow.com/a/19110853/341994 – matt Jun 08 '16 at 17:26
  • @matt I've done a lot of research around the first method because I assumed the problem was there, but I just found a note that images used with `CGImageCreateWithMaskingColors` "cannot have an alpha component." So I've rearranged the method to try to create an image with no alpha component (and to remove those weird class method calls while I'm at it), but now I'm getting a nil image returned. (Still a n00b, I guess.) I added the code above, any chance you can spot where I'm going wrong? – Nerrolken Jun 08 '16 at 20:40
  • If you are going to call `UIGraphicsGetImageFromCurrentImageContext`, why are you starting with `CGBitmapContextCreate`? You should be starting with `UIGraphicsBeginImageContextWithOptions`, should you not? You are getting no image from your image context because at the moment you do not _have_ any image context — you have a bitmap context. – matt Jun 08 '16 at 21:12
  • @matt Again, I'm new to all this. It seemed like a normal image context would have alpha values by default, while the bitmap context (which I took to be a type of image context) had that `kCGImageAlphaNoneSkipLast` option. The images used with CGImageCreateWithMaskingColors "cannot have an alpha component," so my current theory is that that's why the masking method isn't receiving it properly: because the images I'm creating here have an alpha component. Is there a way to remove the alpha component from the image context, or else from the result image after the context is closed? – Nerrolken Jun 08 '16 at 23:58
  • 1
    (1) `UIGraphicsBeginImageContextWithOptions` has an option (hence the name) to make an opaque image. Or (2) if you use a bitmap context, then you would use `CGBitmapContextCreateImage` to obtain the image as a CGImage, i.e. a bitmap. I am not telling you which to use, but I am telling you (again) that you cannot use `UIGraphicsGetImageFromCurrentImageContext` to get an image from a bitmap context, because it is _not_ an image context — it's a bitmap context. – matt Jun 09 '16 at 00:32
  • @matt You could probably summarise the above comments in an answer – using an opaque image context for the perlin map image generation appears to fix OP's problem. Although I would personally use a bitmap context for this kind of pixel specific operation to write the RGB values into the bitmap directly, rather than using `CGContextFillRect`. – Hamish Jun 12 '16 at 09:14

1 Answer


As you've discovered, CGImageCreateWithMaskingColors requires an image without an alpha channel. However, as @matt points out, your attempted fix doesn't work because you're trying to mix and match image-context function calls (e.g. UIGraphicsGetImageFromCurrentImageContext) with a bitmap context.

The simplest fix, therefore, is to carry on using an image context, but make it opaque. You can do this by calling UIGraphicsBeginImageContextWithOptions and passing YES for the opaque argument, which will produce an image without an alpha channel.
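
A minimal sketch of that fix, keeping your original drawing loop as-is (the scale argument is discussed further down):

+ (UIImage *)perlinMapOfSize:(CGSize)size {
    CZGPerlinGenerator *generator = [[CZGPerlinGenerator alloc] init];
    generator.octaves = 10;
    generator.persistence = 0.5;
    generator.zoom = 150;

    // opaque = YES: the backing bitmap has no alpha channel,
    // which is exactly what CGImageCreateWithMaskingColors requires
    UIGraphicsBeginImageContextWithOptions(size, YES, 1.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    CGContextSetRGBFillColor(ctx, 0.0, 0.0, 0.0, 1.0);
    CGContextFillRect(ctx, CGRectMake(0.0, 0.0, size.width, size.height));

    for (int x = 0; x < size.width; x++) {
        for (int y = 0; y < size.height; y++) {
            double value = [generator perlinNoiseX:x y:y z:0 t:0];
            CGContextSetRGBFillColor(ctx, value, value, value, 1.0);
            CGContextFillRect(ctx, CGRectMake(x, y, 1.0, 1.0));
        }
    }

    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}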

That being said, a bitmap context would be a more appropriate solution here, as it allows you to manipulate the pixel data directly rather than making a ton of CGContextFillRect calls.

Something like this should achieve the desired result:

+ (UIImage *)perlinMapOfSize:(CGSize)size {

    // your generator setup
    CZGPerlinGenerator *generator = [[CZGPerlinGenerator alloc] init];
    generator.octaves = 10;
    generator.persistence = 0.5;
    generator.zoom = 150;

    // bitmap info
    size_t width = size.width;
    size_t height = size.height;
    size_t bytesPerPixel = 4;
    size_t bitsPerComponent = 8;
    size_t bytesPerRow = width * bytesPerPixel;
    CGBitmapInfo bitmapInfo = kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big;

    // allocate memory for the bitmap
    UInt8* imgData = calloc(bytesPerRow, height);

    // create RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create an RGBA bitmap context where the alpha component is ignored
    CGContextRef ctx = CGBitmapContextCreate(imgData, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo);

    // iterate over pixels
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {

            // byte offset of the current pixel
            size_t byte = x * bytesPerPixel + y * bytesPerRow;

            // get noise data for given x & y
            int value = round([generator perlinNoiseX:x y:y z:0 t:0]*255.0);

            // limit values (not too sure of the range of values that the method outputs – this may not be needed)
            if (value > 255) value = 255;
            else if (value < 0) value = 0;

            // write values to pixel components
            imgData[byte] = value; // R
            imgData[byte+1] = value; // G
            imgData[byte+2] = value; // B
        }
    }

    // get image
    CGImageRef imgRef = CGBitmapContextCreateImage(ctx);
    UIImage* img = [UIImage imageWithCGImage:imgRef];

    // clean up
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(imgRef);
    free(imgData);

    return img;
}
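
With both category methods in place, a quick end-to-end check might look like this (the size and luma bounds here are arbitrary; the bounds are in the 0–255 range your mask method expects):

UIImage *clouds = [UIImage perlinMapOfSize:CGSizeMake(256.0, 256.0)];
UIImage *masked = [clouds imageWithLumaMaskFromDark:0.0 toLight:128.0];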

A few other things to note

UIGraphicsBeginImageContext(WithOptions) automatically makes the image context the current context – thus you don't need to do UIGraphicsPushContext/UIGraphicsPopContext with it.
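
For illustration, that means your two helpers could shrink to:

+ (CGContextRef)contextSetup:(CGSize)size {
    // the new image context is already the current context; no push needed
    UIGraphicsBeginImageContext(size);
    return UIGraphicsGetCurrentContext();
}

+ (UIImage *)finishImageContext {
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}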

UIGraphicsBeginImageContext uses a scale of 1.0 – meaning you're working with sizes in pixels, not points. Therefore the images you output may not be suitable for 2x or 3x displays. You should usually be using UIGraphicsBeginImageContextWithOptions instead, with a scale of 0.0 (the main screen scale), or image.scale if you're just manipulating a given image (appropriate for your imageWithLumaMaskFromDark method).
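
For example:

// scale of 0.0 means "use the main screen's scale"; YES makes the context opaque
UIGraphicsBeginImageContextWithOptions(size, YES, 0.0);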

CGBitmapContextCreate will also create a context with a scale of 1.0. If you want the image to match your screen's scale, simply multiply the width and height that you input by the screen scale:

CGFloat scale = [UIScreen mainScreen].scale;

size_t width = size.width*scale;
size_t height = size.height*scale;

and then supply the scale when you create the output UIImage:

UIImage* img = [UIImage imageWithCGImage:imgRef scale:scale orientation:UIImageOrientationUp];

If you want to do some CGContext drawing calls in the bitmap context, you'll also want to scale it before drawing, so you can work in a points coordinate system:

CGContextScaleCTM(ctx, scale, scale);
Hamish