I've got two methods I'm working with, and they aren't playing nicely. The first is a Perlin noise generator, which exports a black-and-white UIImage of random clouds, and it's working perfectly. The second method takes a UIImage and filters out all pixels above or below a given brightness, returning an image with transparency where the unwanted pixels were, and it's working perfectly with the black-and-white test images I've been using.
But when I try to feed an image from the first method into the second, it doesn't work. Every pixel gets removed, regardless of the input values, and I get back a blank UIImage. (To be clear, that's a non-nil UIImage with nothing but transparent pixels, as though every pixel is being counted as outside the desired brightness range, regardless of that pixel's actual brightness.)
Below are the two methods. I adapted each from tutorials and SO answers, but while I'm not 100% comfortable with Core Graphics, they seem reasonably simple to me: the first iterates through each pixel and colors it with RGB values from a Perlin formula, and the second creates a mask based on input values. (Note: both are category methods on UIImage, so the "self" references in the latter method are referring to the source image.)
+ (UIImage *)perlinMapOfSize:(CGSize)size {
    CZGPerlinGenerator *generator = [[CZGPerlinGenerator alloc] init];
    generator.octaves = 10;
    generator.persistence = 0.5;
    generator.zoom = 150;

    CGContextRef ctx = [self contextSetup:size];

    CGContextSetRGBFillColor(ctx, 0.000, 0.000, 0.000, 1.000);
    CGContextFillRect(ctx, CGRectMake(0.0, 0.0, size.width, size.height));

    for (CGFloat x = 0.0; x < size.width; x += 1.0) {
        for (CGFloat y = 0.0; y < size.height; y += 1.0) {
            double value = [generator perlinNoiseX:x y:y z:0 t:0];
            CGContextSetRGBFillColor(ctx, value, value, value, 1.0);
            CGContextFillRect(ctx, CGRectMake(x, y, 1.0, 1.0));
        }
    }

    return [self finishImageContext];
}
- (UIImage *)imageWithLumaMaskFromDark:(CGFloat)lumaFloor toLight:(CGFloat)lumaCeil {
    // inputs range from 0 - 255
    CGImageRef rawImageRef = self.CGImage;
    const CGFloat colorMasking[6] = {lumaFloor, lumaCeil, lumaFloor, lumaCeil, lumaFloor, lumaCeil};

    UIGraphicsBeginImageContext(self.size);
    CGImageRef maskedImageRef = CGImageCreateWithMaskingColors(rawImageRef, colorMasking);

    {
        // flip the context vertically so the CGImage draws right side up in UIKit coordinates
        CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0.0, self.size.height);
        CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0);
    }

    CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, self.size.width, self.size.height), maskedImageRef);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    CGImageRelease(maskedImageRef);
    UIGraphicsEndImageContext();
    return result;
}
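For context, I'm chaining them roughly like this (the size and luma values here are just placeholders, not the exact numbers from my project):

    // example size/thresholds only; the real values come from elsewhere in my project
    UIImage *clouds = [UIImage perlinMapOfSize:CGSizeMake(256.0, 256.0)];
    UIImage *masked = [clouds imageWithLumaMaskFromDark:100.0 toLight:200.0];
    // 'masked' is non-nil but every pixel is transparent when 'clouds' comes from perlinMapOfSize:,
    // even though the same call behaves as expected on my file-based test images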
Does anyone know why an image from the former method would be incompatible with the latter? The former method is successfully returning cloud images, and the latter method is working with every image I feed into it from my computer or the internet, just not the images from the former method.
I'm assuming that the CGImageCreateWithMaskingColors() call in the second method is looking for some information that the first method isn't putting into the image, or something; I just don't know the system well enough to figure out what's wrong.
Can anyone shed some light?
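In case it helps with diagnosis, here's the kind of quick comparison I could run between the two sources of images, just logging a few CGImage properties (the image name is a placeholder for one of my working test images):

    UIImage *generated = [UIImage perlinMapOfSize:CGSizeMake(256.0, 256.0)];
    UIImage *loaded = [UIImage imageNamed:@"testClouds"]; // placeholder name for a file-based test image

    for (UIImage *image in @[generated, loaded]) {
        CGImageRef cg = image.CGImage;
        NSLog(@"alpha info: %d, bits/pixel: %zu, bits/component: %zu, color space model: %d",
              (int)CGImageGetAlphaInfo(cg),
              CGImageGetBitsPerPixel(cg),
              CGImageGetBitsPerComponent(cg),
              (int)CGColorSpaceGetModel(CGImageGetColorSpace(cg)));
    }

My guess is that something in those properties (the alpha info in particular) differs between the two, but I don't know what CGImageCreateWithMaskingColors() actually requires.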
EDIT: As requested, here are the two other methods referenced above. It's an odd setup, I know, to use class methods like that in a category, but it's how I found the code in a tutorial, and it works, so I never bothered to change it.
+ (CGContextRef) contextSetup: (CGSize) size {
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    UIGraphicsPushContext(context);
    //NSLog(@"Begin drawing");
    return context;
}

+ (UIImage *) finishImageContext {
    //NSLog(@"End drawing");
    UIGraphicsPopContext();
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
EDIT 2: Based on some research suggesting that the CGImageCreateWithMaskingColors() function doesn't work with images that include an alpha component, I've rearranged the first method like so. My gut tells me the alpha channel was the problem, but I'm kind of casting about in the dark. This is my attempt at creating an image with kCGImageAlphaNone, but now UIGraphicsGetImageFromCurrentImageContext() at the end is returning nil.
+ (UIImage *)perlinMapOfSize:(CGSize)size {
    CZGPerlinGenerator *generator = [[CZGPerlinGenerator alloc] init];
    generator.octaves = 10;
    generator.persistence = 0.5;
    generator.zoom = 150;

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, size.width, size.height, 8, size.width * 4, colorSpace, kCGImageAlphaNoneSkipLast);
    UIGraphicsPushContext(ctx);

    CGContextSetRGBFillColor(ctx, 0.000, 0.000, 0.000, 1.0);
    CGContextFillRect(ctx, CGRectMake(0.0, 0.0, size.width, size.height));

    for (int x = 0; x < size.width; x++) {
        for (int y = 0; y < size.height; y++) {
            double value = [generator perlinNoiseX:x y:y z:0 t:0];
            CGContextSetRGBFillColor(ctx, value, value, value, 1.0);
            CGContextFillRect(ctx, CGRectMake(x, y, 1.0, 1.0));
        }
    }

    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    NSLog(@"Output: %@", outputImage);
    UIGraphicsEndImageContext();
    CGContextRelease(ctx);
    return outputImage;
}
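One thing I'm wondering about but haven't tested: since ctx here is a plain bitmap context rather than one set up by UIGraphicsBeginImageContext(), maybe I should be pulling the image straight out of the bitmap context instead of asking UIKit for it. Something like this sketch at the end of the method:

    // untested sketch: grab the CGImage directly from the bitmap context
    CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
    UIImage *outputImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    return outputImage;

But I haven't tried it yet, and I'm not sure whether the result would be any more acceptable to CGImageCreateWithMaskingColors() either.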