Many before me have shared their knowledge in stack overflow about this topic. I was able to take over much of the tips and code snippets thanks to the contribution. It all worked quite good except that it was often hard on the working memory. This time-lapse application that I am working on, was able to generate a movie out of 2000 hd images and more, but since iOS 7.1 it is having trouble generating a video out of more than 240 hd images. 240 images seems to be the limit on an iPhone 5s. I was wondering whether anybody has had these problems too and whether anybody has found solutions to it. Now to the source code.
This part iterates through saved uiimages in the apps document's directory.
if ([adaptor.assetWriterInput isReadyForMoreMediaData])
{
CMTime frameTime = CMTimeMake(1, fps);
CMTime lastTime=CMTimeMake(i, fps);
CMTime presentTime=CMTimeAdd(lastTime, frameTime);
NSString *imageFilePath = [NSString stringWithFormat:@"%@/%@",folderPathName, imageFileName];
image = [UIImage imageWithContentsOfFile:imageFilePath] ;
cgimage = [image CGImage];
buffer = (CVPixelBufferRef)[self pixelBufferFromCGImage: cgimage];
bool result = [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
if (result == NO)
{
NSLog(@"failed to append buffer %i", i);
_videoStatus = 0;
success = NO;
return success;
}
//buffer has to be released here or memory pressure will occur
if(buffer != NULL)
{
CVBufferRelease(buffer);
buffer = NULL;
}
}
This is the local method which appears to make most trouble. It gets the pixel buffer reference from cgimage.
- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image
{
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
CGImageGetHeight(image), kCVPixelFormatType_32ARGB, (CFDictionaryRef) CFBridgingRetain(options),
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, CGImageGetWidth(image),
CGImageGetHeight(image), 8, 4*CGImageGetWidth(image), rgbColorSpace, (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(M_PI));
float width = CGImageGetWidth(image);
float height = CGImageGetHeight(image);
CGContextDrawImage(context, CGRectMake(0, 0, width,
height), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
I have been spending a lot of time on this and not moving forward. Help is much appreciated. If any more details are necessary, I am glad to provide.