I am working on an application that allows the user to select an image (camera or gallery), then draw on that image with their finger. The area that they draw becomes the transparent portion of a mask. A second image is then drawn below the first image.
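To make the setup concrete, the compositing I'm after boils down to something like this (a minimal sketch, not my actual code; the method and the names strokesImage, originalImage, and backgroundImage are placeholders). strokesImage is a grayscale bitmap that is black everywhere except where the user has drawn in white; with a CGImage mask, the white areas knock the original out so the image below shows through where the user drew:

// Minimal sketch of the mask-then-composite idea (placeholder names, not my real code).
- (UIImage *)composeOriginal:(UIImage *)originalImage
                 withStrokes:(UIImage *)strokesImage
                  background:(UIImage *)backgroundImage
{
    CGImageRef strokesRef = [strokesImage CGImage];
    // Build an image mask from the grayscale stroke bitmap.
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(strokesRef),
                                        CGImageGetHeight(strokesRef),
                                        CGImageGetBitsPerComponent(strokesRef),
                                        CGImageGetBitsPerPixel(strokesRef),
                                        CGImageGetBytesPerRow(strokesRef),
                                        CGImageGetDataProvider(strokesRef),
                                        NULL, false);
    // Punch the mask into the original photo, then draw it over the second image.
    CGImageRef masked = CGImageCreateWithMask([originalImage CGImage], mask);

    CGRect bounds = CGRectMake(0, 0, originalImage.size.width, originalImage.size.height);
    UIGraphicsBeginImageContext(originalImage.size);
    [backgroundImage drawInRect:bounds];                    // second image underneath
    [[UIImage imageWithCGImage:masked] drawInRect:bounds];  // masked photo on top
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGImageRelease(mask);
    CGImageRelease(masked);
    return result;
}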
I've been working on improving performance, especially on the iPad 3, and I seem to have hit a wall. I am new to iOS and Objective-C.
So my question is: What can I do to improve the drawing performance of my application?
I used this tutorial as a starting point for my code.
First I draw to a cache context:
- (void)drawToCache {
    CGRect dirtyPoint1;
    CGRect dirtyPoint2;
    UIColor *color;
    if (point1.x > -1) {
        hasDrawn = YES;
        maskContext = CGLayerGetContext(maskLayer);
        blackBackgroundContext = CGLayerGetContext(blackBackgroundLayer);
        if (!doUndo) {
            // Normal drawing: paint white strokes into the mask layer.
            color = [UIColor whiteColor];
            CGContextSetBlendMode(maskContext, kCGBlendModeColor);
        }
        else {
            // Undo: erase previously drawn strokes from the mask layer.
            color = [UIColor clearColor];
            CGContextSetBlendMode(maskContext, kCGBlendModeClear);
        }
        CGContextSetShouldAntialias(maskContext, YES);
        CGContextSetRGBFillColor(blackBackgroundContext, 0.0, 0.0, 0.0, 1.0);
        CGContextFillRect(blackBackgroundContext, self.bounds);
        CGContextSetStrokeColorWithColor(maskContext, [color CGColor]);
        CGContextSetLineCap(maskContext, kCGLineCapRound);
        CGContextSetLineWidth(maskContext, brushSize);

        // After four touches we should have a back anchor point; if not, fall back
        // to the current anchor point.
        double x0 = (point0.x > -1) ? point0.x : point1.x;
        double y0 = (point0.y > -1) ? point0.y : point1.y;
        double x1 = point1.x;
        double y1 = point1.y;
        double x2 = point2.x;
        double y2 = point2.y;
        double x3 = point3.x;
        double y3 = point3.y;

        // Midpoints of the three segments between the four touch points.
        double xc1 = (x0 + x1) / 2.0;
        double yc1 = (y0 + y1) / 2.0;
        double xc2 = (x1 + x2) / 2.0;
        double yc2 = (y1 + y2) / 2.0;
        double xc3 = (x2 + x3) / 2.0;
        double yc3 = (y2 + y3) / 2.0;

        // Segment lengths, used to place the control points proportionally
        // between the midpoints.
        double len1 = sqrt((x1 - x0) * (x1 - x0) + (y1 - y0) * (y1 - y0));
        double len2 = sqrt((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1));
        double len3 = sqrt((x3 - x2) * (x3 - x2) + (y3 - y2) * (y3 - y2));
        double k1 = len1 / (len1 + len2);
        double k2 = len2 / (len2 + len3);
        double xm1 = xc1 + (xc2 - xc1) * k1;
        double ym1 = yc1 + (yc2 - yc1) * k1;
        double xm2 = xc2 + (xc3 - xc2) * k2;
        double ym2 = yc2 + (yc3 - yc2) * k2;

        // Bezier control points for the segment between point1 and point2.
        double smooth_value = 0.5;
        float ctrl1_x = xm1 + (xc2 - xm1) * smooth_value + x1 - xm1;
        float ctrl1_y = ym1 + (yc2 - ym1) * smooth_value + y1 - ym1;
        float ctrl2_x = xm2 + (xc2 - xm2) * smooth_value + x2 - xm2;
        float ctrl2_y = ym2 + (yc2 - ym2) * smooth_value + y2 - ym2;

        // Stroke a smoothed cubic Bezier from point1 to point2 into the mask layer.
        CGContextMoveToPoint(maskContext, point1.x, point1.y);
        CGContextAddCurveToPoint(maskContext, ctrl1_x, ctrl1_y, ctrl2_x, ctrl2_y, point2.x, point2.y);
        CGContextStrokePath(maskContext);

        // Only invalidate the area around the new segment.
        dirtyPoint1 = CGRectMake(point1.x - (brushSize / 2), point1.y - (brushSize / 2), brushSize, brushSize);
        dirtyPoint2 = CGRectMake(point2.x - (brushSize / 2), point2.y - (brushSize / 2), brushSize, brushSize);
        [self setNeedsDisplayInRect:CGRectUnion(dirtyPoint1, dirtyPoint2)];
    }
}
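For reference, maskLayer, blackBackgroundLayer, and backgroundTextureLayer are CGLayers created once during view setup, roughly like this (a sketch rather than my exact code; the asset name is a placeholder and the sizes may differ in my project):

// Rough sketch of the cache layer setup. CGLayerCreateWithContext needs an existing
// context, so this runs somewhere a valid context is available (e.g., the first pass
// through drawRect, or using a temporary bitmap context).
CGContextRef viewContext = UIGraphicsGetCurrentContext();
CGSize layerSize = self.bounds.size;
maskLayer = CGLayerCreateWithContext(viewContext, layerSize, NULL);
blackBackgroundLayer = CGLayerCreateWithContext(viewContext, layerSize, NULL);
backgroundTextureLayer = CGLayerCreateWithContext(viewContext, layerSize, NULL);

// Pre-fill the texture layer with the replacement image so drawRect only has to copy it.
CGContextRef textureContext = CGLayerGetContext(backgroundTextureLayer);
UIImage *texture = [UIImage imageNamed:@"backgroundTexture.png"]; // placeholder asset name
CGContextDrawImage(textureContext,
                   CGRectMake(0, 0, layerSize.width, layerSize.height),
                   [texture CGImage]);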
And my drawRect:
- (void)drawRect:(CGRect)rect {
    CGRect imageRect = CGRectMake(0, 0, 1024, 668);
    CGSize size = CGSizeMake(1024, 668);

    // Draw the user's strokes over the black layer in an offscreen context;
    // this produces the black-and-white image that gets turned into a mask.
    UIGraphicsBeginImageContext(size);
    CGContextSetShouldAntialias(UIGraphicsGetCurrentContext(), YES);
    CGContextDrawLayerInRect(UIGraphicsGetCurrentContext(), imageRect, blackBackgroundLayer);
    CGContextDrawLayerInRect(UIGraphicsGetCurrentContext(), imageRect, maskLayer);
    CGImageRef maskRef = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());
    UIGraphicsEndImageContext();

    // Make the mask
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef),
                                        NULL, false);

    // Mask the user's image
    CGImageRef masked = CGImageCreateWithMask([self.original CGImage], mask);

    // Render the background texture layer to an image in a second offscreen context
    UIGraphicsBeginImageContext(size);
    CGContextDrawLayerInRect(UIGraphicsGetCurrentContext(), imageRect, backgroundTextureLayer);
    CGImageRef background = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());
    UIGraphicsEndImageContext();

    // From here on UIGraphicsGetCurrentContext() is the view's own context again.
    // Flip it so CGContextDrawImage draws everything right side up.
    CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0, 668);
    CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0);

    // Draw the background on the context
    CGContextDrawImage(UIGraphicsGetCurrentContext(), imageRect, background);
    // Draw the masked image on top of it
    CGContextDrawImage(UIGraphicsGetCurrentContext(), imageRect, masked);

    // Release everything to prevent memory leaks
    CGImageRelease(background);
    CGImageRelease(maskRef);
    CGImageRelease(mask);
    CGImageRelease(masked);
}
I spent yesterday researching these questions:
How can I improve CGContextFillRect and CGContextDrawImage performance
CGContextDrawImage very slow on iPhone 4
Drawing in CATiledLayer with CoreGraphics CGContextDrawImage
iPhone: How do I add layers to UIView's root layer to display an image with the content property?
iPhone CGContextDrawImage and UIImageJPEGRepresentation drastically slowing down application
Any help is appreciated!
Thanks, Kevin