I'm trying to let a user pan/zoom through a static image with a selection rectangle on the main image, and a separate UIView for the "magnified" image.
The "magnified" UIView implements drawRect:
// Rotate selectionRect if the image isn't portrait internally --
// CGImageCreateWithImageInRect works in the raw CGImage's coordinate
// space, which ignores imageOrientation.
CGRect tmpRect = selectionRect;
if (image.imageOrientation == UIImageOrientationLeft ||
    image.imageOrientation == UIImageOrientationLeftMirrored ||
    image.imageOrientation == UIImageOrientationRight ||
    image.imageOrientation == UIImageOrientationRightMirrored) {
    tmpRect = CGRectMake(selectionRect.origin.y,
                         image.size.width - selectionRect.origin.x - selectionRect.size.width,
                         selectionRect.size.height,
                         selectionRect.size.width);
}

// Crop out the selected region and draw it into this view's bounds.
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], tmpRect);
[[UIImage imageWithCGImage:imageRef scale:image.scale orientation:image.imageOrientation] drawInRect:rect];
CGImageRelease(imageRef);
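For context, selectionRect gets updated from the gesture handlers on the main image, so drawRect: fires on every pan/zoom tick. Roughly like this (the handler and the magnifier property names here are illustrative, not my exact code):

- (void)handlePan:(UIPanGestureRecognizer *)gesture {
    CGPoint delta = [gesture translationInView:self];
    // Slide the selection by the pan delta; the real code also clamps
    // selectionRect to the image bounds.
    selectionRect = CGRectOffset(selectionRect, delta.x, delta.y);
    [gesture setTranslation:CGPointZero inView:self];
    [magnifierView setSelectionRect:selectionRect]; // calls setNeedsDisplay
}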
The performance on this is atrocious: it spends 92% of its time in -[UIImage drawInRect:]. Drilling deeper, that breaks down as 84.5% in ripc_AcquireImage and 7.5% in ripc_RenderImage, and ripc_AcquireImage in turn spends 51% of its time decoding the JPEG and 30% upsampling.
So... I guess my question is: what's the best way to avoid this? One option is to not take in a JPEG to start with, and that's a real solution for some cases (à la captureStillImageAsynchronouslyFromConnection: without the JPEG intermediary). But if I'm getting a UIImage off the camera roll, say, is there a clean way to convert the JPEG-backed UIImage into a decoded bitmap? And is converting it even the right thing to do (is there a "cacheAsBitmap" flag somewhere that would do that, essentially)?
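For concreteness, the kind of conversion I have in mind is rendering the image once into a bitmap context and keeping the result, something like this sketch (the helper name is mine, and the opaque/scale arguments are assumptions):

// Force the JPEG to decode once, so later drawInRect: calls just blit
// pixels instead of re-running the decoder.
- (UIImage *)decodedImageFromImage:(UIImage *)sourceImage {
    // Assumes the image is opaque; pass NO if it might have alpha.
    UIGraphicsBeginImageContextWithOptions(sourceImage.size, YES, sourceImage.scale);
    [sourceImage drawAtPoint:CGPointZero]; // JPEG decode happens here, once
    UIImage *decoded = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return decoded;
}

Note this would also bake the orientation in (the result comes back as UIImageOrientationUp), so the tmpRect rotation above would no longer be needed.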