I have built a camera using AVFoundation.
Once my AVCaptureStillImageOutput has completed its captureStillImageAsynchronouslyFromConnection:completionHandler: method, I create an NSData object like this:
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
Once I have the NSData object, I would like to rotate the image *without* converting it to a UIImage. I have found that I can convert it to a CGImage to do so.
After I have the imageData, I start converting to a CGImage, but I have found that the CGImageRef ends up being thirty times larger than the NSData object. Here is the code I use to convert from NSData to CGImage:
CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef)(imageData));
CGImageRef imageRef = CGImageCreateWithJPEGDataProvider(imgDataProvider, NULL, true, kCGRenderingIntentDefault);
If I NSLog the size of the image, it comes out to 30 megabytes, when the NSData was a 1.5-2 megabyte image!
size_t imageSize = CGImageGetBytesPerRow(imageRef) * CGImageGetHeight(imageRef);
NSLog(@"cgimage size = %zu",imageSize);
I thought that maybe going from NSData to CGImage decompresses the image, and that converting back to NSData might bring it back down to the original file size.
imageData = (NSData *) CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
The above NSData has the same length as the CGImageRef object.
If I try to save the image, I get a 30 MB file that cannot be opened.
I am totally new to using CGImage, so I am not sure if I am converting between NSData and CGImage incorrectly, or if I need to call some method to re-compress the data afterwards.
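From reading the docs, my guess is that CGDataProviderCopyData on the image's data provider just hands back the decoded pixel buffer, and that to get JPEG bytes again I would need something like ImageIO's CGImageDestination. A rough sketch of what I mean (untested on my end; CopyJPEGRepresentation is just a name I made up, not an Apple API):

```c
#include <CoreFoundation/CoreFoundation.h>
#include <ImageIO/ImageIO.h>

// Sketch: re-encode a decoded CGImage back to JPEG bytes with ImageIO,
// instead of copying the raw pixels out of its data provider.
CFDataRef CopyJPEGRepresentation(CGImageRef imageRef) {
    CFMutableDataRef jpegData = CFDataCreateMutable(kCFAllocatorDefault, 0);
    // "public.jpeg" is the UTI that kUTTypeJPEG resolves to.
    CGImageDestinationRef dest =
        CGImageDestinationCreateWithData(jpegData, CFSTR("public.jpeg"), 1, NULL);
    if (!dest) { CFRelease(jpegData); return NULL; }

    CGImageDestinationAddImage(dest, imageRef, NULL);
    bool ok = CGImageDestinationFinalize(dest);  // writes the JPEG into jpegData
    CFRelease(dest);

    if (!ok) { CFRelease(jpegData); return NULL; }
    return jpegData;  // compressed again, so presumably close to the original size
}
```

Is something like this the right direction, or is there a simpler way?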
Thanks in advance,
Will