
I have a UIImage which is loaded from a CIImage with:

tempImage = [UIImage imageWithCIImage:ciImage];

The problem is I need to crop tempImage to a specific CGRect and the only way I know how to do this is by using CGImage. The problem is that in the iOS 6.0 documentation I found this:

CGImage
If the UIImage object was initialized using a CIImage object, the value of the property is NULL.

How do I convert from CIImage to CGImage? I'm using this code, but it has a memory leak (and I can't see where):

+(UIImage*)UIImageFromCIImage:(CIImage*)ciImage {
    CGSize size = ciImage.extent.size;
    UIGraphicsBeginImageContext(size);
    CGRect rect;
    rect.origin = CGPointZero;
    rect.size   = size;
    UIImage *remImage = [UIImage imageWithCIImage:ciImage];
    [remImage drawInRect:rect];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    remImage = nil;
    ciImage = nil;
    return result;
}
tagyro
  • You mention that you need the CGImage to do the crop. As Joris Kluivers said, you can do the crop without the CGImage by using the CICrop filter on the CIImage. Is there anything else you need the CGImage for? If so, what? – Peter Hosey Jan 18 '13 at 17:36
  • Also, regarding the memory leak, did you try using Instruments's Leaks template? Between the Leaks instrument and the Allocations instrument's Heapshot tool, you should be able to nail down where in your app you were leaking or accumulating memory. – Peter Hosey Jan 18 '13 at 17:52
  • @PeterHosey I did and I found that for some reason I have over 200 live instances of CIImage and over 100 of CGImage, all originating from this method. I just don't see where – tagyro Jan 18 '13 at 17:58

3 Answers


Swift 3, Swift 4 and Swift 5

Here is a nice little function to convert a CIImage to a CGImage in Swift.

func convertCIImageToCGImage(inputImage: CIImage) -> CGImage? {
    let context = CIContext(options: nil)
    // createCGImage(_:from:) already returns an optional CGImage
    return context.createCGImage(inputImage, from: inputImage.extent)
}

On macOS or tvOS, you would typically use:

let ctx = CIContext(options: [.useSoftwareRenderer: false])
let cgImage = ctx.createCGImage(output, from: output.extent)

Several other context options are available in Core Image, such as .allowLowPower, .cacheIntermediates, .highQualityDownsample, various priorities, and so on.

Notes:

  • CIContext(options: nil) may fall back to a software renderer and can be quite slow. To improve performance, use CIContext(options: [CIContextOption.useSoftwareRenderer: false]) - this forces operations to run on the GPU and can be much faster.
  • If you use a CIContext more than once, cache it, as Apple recommends.
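As a sketch of that caching advice (the type and function names here are illustrative, not from the answer above), the context can be created once and reused for every conversion:

```swift
import CoreImage

enum ImageRendering {
    // Create the CIContext once; building a new context per conversion is expensive.
    static let context = CIContext(options: [.useSoftwareRenderer: false])
}

/// Renders a CIImage into a CGImage using the shared, cached context.
func cgImage(from ciImage: CIImage) -> CGImage? {
    // A CIImage with an infinite extent (e.g. from a color generator)
    // cannot be rendered directly; crop it first.
    guard !ciImage.extent.isInfinite else { return nil }
    return ImageRendering.context.createCGImage(ciImage, from: ciImage.extent)
}
```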
skymook

See the CIContext documentation for createCGImage:fromRect:

CGImageRef img = [myContext createCGImage:ciImage fromRect:[ciImage extent]];

From an answer to a similar question: https://stackoverflow.com/a/10472842/474896

Also since you have a CIImage to begin with, you could use CIFilter to actually crop your image.
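For example (a Swift sketch of that idea; the helper name and rect handling are mine, not from this answer), the crop can happen entirely in Core Image before any pixels are rendered:

```swift
import CoreImage

/// Crops a CIImage without rendering it first. Core Image is lazy, so the
/// pixels outside `rect` are never computed when the result is drawn later.
func crop(_ image: CIImage, to rect: CGRect) -> CIImage {
    // `cropped(to:)` wraps the CICrop filter; the extent of the result is
    // the intersection of `rect` with the image's original extent.
    return image.cropped(to: rect.intersection(image.extent))
}
```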

Joris Kluivers
    Could and should: Core Image is *extremely* lazy, in that `createCGImage:fromRect:` (or any other method that requires finished pixels) is the point at which all the work actually gets done; no actual filtering has taken place before then. Cropping with a Core Image filter will actually save quite a bit of work (proportional to however much you crop out), since then you won't be asking for the cropped-out pixels, so they won't be rendered at all. (Of course, the other way would be to pass the crop rectangle for the `fromRect:`, which will have the same effect.) – Peter Hosey Jan 18 '13 at 17:12
  • @Joris Kluivers thanks for the answer but I get the same result: CIContext *context = [CIContext new]; CGImageRef ref = [context createCGImage:ciImage fromRect:[ciImage extent]]; tempImage = [UIImage imageWithCGImage:ref]; CGImageRelease(ref); NSLog(@"tempImage: %f %f",tempImage.size.width,tempImage.size.height); outputs: tempImage: 0.000000 0.000000 – tagyro Jan 18 '13 at 17:22
  • @PeterHosey You're right but unfortunately I don't have the crop info when I'm doing the conversion and I need to do the conversion before because I use the CGImage. Thanks – tagyro Jan 18 '13 at 17:27
  • @AndreiStoleru: You might try using `contextWithOptions:` instead of `new` to create the context. Also, what do you get in your log if you log the value of `[ciImage extent]`? – Peter Hosey Jan 18 '13 at 17:34
  • @PeterHosey NSLog(@"extent: %f",ciImage.extent.size.width); DEBUG -[CamViewController captureOutput:didOutputSampleBuffer:fromConnection:]:180 - extent: 1280.000000 Also tried with `contextWithOptions:` but same result. – tagyro Jan 18 '13 at 17:48
  • Here is how you create your context to get the CGImage: CIContext *myContext = [CIContext contextWithOptions:nil]; CGImageRef imgRef = [myContext createCGImage:ciImage fromRect:[ciImage extent]]; UIImage *imagefromCGImage = [UIImage imageWithCGImage:imgRef]; CGImageRelease(imgRef); – Alex Zavatone Jun 03 '14 at 20:47
  • Better not to do this conversion on the main thread; it costs a lot of time. – Itachi Dec 08 '16 at 03:46

After some googling I found this method which converts a CMSampleBufferRef to a CGImage:

// Create a CGImageRef from sample buffer data.
// Assumes a non-planar 32BGRA pixel buffer (kCVPixelFormatType_32BGRA).
// The caller is responsible for releasing the returned CGImageRef.
+ (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);   // Lock the buffer while reading it

    // For a non-planar buffer, use CVPixelBufferGetBaseAddress (not the
    // per-plane variant) so it matches CVPixelBufferGetBytesPerRow below.
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);

    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // Do not release imageBuffer: it is owned by the sample buffer.

    return newImage;
}

(but I closed the tab so I don't know where I got it from)

tagyro
    This doesn't involve a CIImage at all. So, really you were intending to create a CGImage from a CMSampleBuffer all along, and the CIImage was just the means you had in mind to do that? – Peter Hosey Jan 18 '13 at 17:52
  • @PeterHosey As you could probably deduce, I'm getting the sampleBuffer from `AVCaptureOutput` and I'm using the CIImage for face detection. The final goal is to crop just the face from the captured image and because I'm too stupid to understand CIImage and CGImage I searched for another solution: CMSampleBuffer. P.S. I accepted the answer from Joris because that's the right answer to my question. – tagyro Jan 18 '13 at 17:59
  • Ah, face detection. That's legitimate. Have you looked at AVCaptureMetadataOutput and AVMetadataFaceObject yet? – Peter Hosey Jan 18 '13 at 18:11
  • @PeterHosey Thanks for the suggestion, seems it's faster than `CIDetector` ;) – tagyro Jan 19 '13 at 09:08