
I'm trying to fix a performance issue when creating GIFs with lots of frames. For example, some GIFs could contain > 1200 frames. With my current code I run out of memory. I'm trying to figure out how to solve this; could it be done in batches? My first idea was to append images together, but I don't think there is a method for that, nor is it how GIFs are created by the ImageIO framework. It would be nice if there were a plural CGImageDestinationAddImages method, but there isn't, so I'm lost on how to solve this. I appreciate any help offered. Sorry in advance for the lengthy code, but I felt it was necessary to show the step-by-step creation of the GIF.

It is acceptable to make a video file instead of a GIF, as long as the differing GIF frame delays can be reproduced in the video and recording doesn't take as long as the sum of all the frame animations.

Note: jump to Latest Update heading below to skip the backstory.

Update 1: Thread lock fixed by using GCD, but the memory issue still remains. 100% CPU utilization is not the concern here, as I show a UIActivityIndicatorView while the work is performed. Using the drawViewHierarchyInRect method might be more efficient/speedy than the renderInContext method; however, I discovered you can't use the drawViewHierarchyInRect method on a background thread with the afterScreenUpdates property set to YES, as it locks up the thread.

There must be some way of writing the GIF out in batches. I believe I've narrowed the memory problem down to CGImageDestinationFinalize. This method seems pretty inefficient for making images with lots of frames, since everything has to be in memory to write out the entire image. I've confirmed this because I use little memory while grabbing the rendered containerView layer images and calling CGImageDestinationAddImage. The moment I call CGImageDestinationFinalize the memory meter spikes up instantly, sometimes up to 2GB depending on the number of frames. The amount of memory required just seems crazy for making a ~20-1000KB GIF.
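For reference, this is roughly the shape of the ImageIO approach I'm describing (not my exact code; the 0.1 second delay is a placeholder and getImage is the snapshot method shown at the bottom of the question). Memory stays low through the whole loop and only spikes inside the finalize call:

const size_t frameCount = 1200;
NSURL *url = [NSURL fileURLWithPath:@"/Users/Test/Desktop/Test.gif"];
CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)url, kUTTypeGIF, frameCount, NULL);

NSDictionary *frameProperties = @{(NSString *)kCGImagePropertyGIFDictionary:
                                      @{(NSString *)kCGImagePropertyGIFDelayTime: @(0.1)}};

for (NSUInteger i = 0; i < frameCount; i++)
{
    @autoreleasepool
    {
        UIImage *frame = [self getImage];

        // Adding frames barely moves the memory meter...
        CGImageDestinationAddImage(destination, frame.CGImage, (__bridge CFDictionaryRef)frameProperties);
    }
}

// ...but this call pulls everything into memory at once to write the whole file out.
CGImageDestinationFinalize(destination);
CFRelease(destination);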

Update 2: There is a method I found that might promise some hope. It is:

CGImageDestinationCopyImageSource(CGImageDestinationRef idst,
                                  CGImageSourceRef isrc,
                                  CFDictionaryRef options,
                                  CFErrorRef *err)

My new idea is that for every 10 (or some other arbitrary number of) frames, I will write those to a destination, and then in the next loop the prior completed destination with 10 frames will become my new source. However there is a problem; the docs state this:

Losslessly copies the contents of the image source, 'isrc', to the destination, 'idst'.
The image data will not be modified. No other images should be added to the image destination.
CGImageDestinationFinalize() should not be called afterward -
the result is saved to the destination when this function returns.

This makes me think my idea won't work, but alas I tried. Continue to Update 3.

Update 3: I've been trying the CGImageDestinationCopyImageSource method with my updated code below, however I'm always getting back an image with only one frame; this is most likely because of the documentation quoted in Update 2 above. There is yet one more method to perhaps try, CGImageSourceCreateIncremental, but I doubt that is what I need.

It seems like I need some way of writing/appending the GIF frames to disk incrementally so I can purge each new chunk out of memory. Perhaps a CGImageDestinationCreateWithDataConsumer with the appropriate callbacks to save the data incrementally would be ideal?

Update 4: I started to try the CGImageDestinationCreateWithDataConsumer method to see if I could manage writing the bytes out as they come in using an NSFileHandle, but again the problem is that calling CGImageDestinationFinalize sends all of the bytes in one shot, which is the same as before: I run out of memory. I really need help to get this solved and will offer a large bounty.
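This is roughly the shape of that attempt; the callbacks just forward whatever ImageIO hands them straight to my NSFileHandle. The problem is that putBytes only gets called in one big burst from inside CGImageDestinationFinalize:

static size_t putBytes(void *info, const void *buffer, size_t count)
{
    NSFileHandle *handle = (__bridge NSFileHandle *)info;
    [handle writeData:[NSData dataWithBytes:buffer length:count]];
    return count;
}

static void releaseConsumer(void *info)
{
    CFBridgingRelease(info);
}

// ...

CGDataConsumerCallbacks callbacks = { putBytes, releaseConsumer };
CGDataConsumerRef consumer = CGDataConsumerCreate((void *)CFBridgingRetain(self.outputHandle), &callbacks);
CGImageDestinationRef destination = CGImageDestinationCreateWithDataConsumer(consumer, kUTTypeGIF, 1200, NULL);
CGDataConsumerRelease(consumer);
// ...then CGImageDestinationAddImage each frame and CGImageDestinationFinalize as before.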

Update 5: I've posted a large bounty. I would like to see some brilliant solutions, without a third-party library or framework, that append the raw NSData GIF bytes to each other and write them out incrementally to disk with an NSFileHandle, essentially creating the GIF manually. Or, if you think there is a solution to be found using ImageIO like what I've tried, that would be amazing too. Swizzling, subclassing, etc. are all fair game.

Update 6: I have been researching how GIFs are made at the lowest level, and I wrote a small test which is along the lines of what I'm going for with the bounty's help. I need to grab the rendered UIImage, get the bytes from it, compress it using LZW and append the bytes along with some other work like determining the global color table. Source of info: http://giflib.sourceforge.net/whatsinagif/bits_and_bytes.html .

Latest Update:

I've spent all week researching this from every angle to see what goes on exactly to build decent quality GIFs given the format's limitations (such as 256 colors max). I believe and assume that ImageIO is creating a single bitmap context under the hood with all image frames merged as one, and performing color quantization on this bitmap to generate a single global color table to be used in the GIF. Using a hex editor on some successful GIFs made from ImageIO confirms they have a global color table and never a local one unless you set one for each frame yourself. Color quantization is performed on this huge bitmap to build a color palette (again an assumption, but one I strongly believe).

I have this weird and crazy idea: the frame images from my app can only differ by one color per frame and, better yet, I know what small sets of colors my app uses. The first/background frame is a frame that contains colors that I cannot control (user supplied content such as photos), so what I'm thinking is I will snapshot this view, then snapshot another view that has the known colors my app deals with, and make this a single bitmap context that I can pass into the normal ImageIO GIF making routines. What's the advantage? Well, this gets it down from ~1200 frames to one by merging two images into a single image. ImageIO will then do its thing on the much smaller bitmap and write out a single GIF with one frame.

Now what can I do to build the actual 1200 frame GIF? I'm thinking I can take that single frame GIF and extract the color table bytes nicely, because they fall between two GIF protocol blocks. I will still need to build the GIF manually, but now I shouldn't have to compute the color palette. I will be stealing the palette ImageIO thought was best and using that for my byte buffer. I still need an LZW compressor implementation (with the bounty's help), but that should be a lot easier than color quantization, which can be painfully slow. LZW can be slow too, so I'm not sure if it's even worth it; I have no idea how LZW will perform sequentially over ~1200 frames.
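To illustrate the palette-stealing part, here is a rough sketch of pulling the global color table out of that single-frame GIF, following the block layout from the giflib link above (the input path is just a placeholder):

NSData *gifData = [NSData dataWithContentsOfFile:@"/Users/Test/Desktop/SingleFrame.gif"];
const uint8_t *bytes = gifData.bytes;

// Byte 10 is the packed field of the logical screen descriptor.
uint8_t packed = bytes[10];
BOOL hasGlobalColorTable = (packed & 0x80) != 0;

// The table holds 2^(N+1) entries of 3 bytes each, where N is the low 3 bits of the packed field.
NSUInteger tableLength = 3 * (1 << ((packed & 0x07) + 1));

NSData *globalColorTable = nil;
if (hasGlobalColorTable)
{
    // The table starts right after the 6-byte header and the 7-byte logical screen descriptor.
    globalColorTable = [gifData subdataWithRange:NSMakeRange(13, tableLength)];
}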

What are your thoughts?

@property (nonatomic, strong) NSFileHandle *outputHandle;    

- (void)makeGIF
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0),^
    {
        NSString *filePath = @"/Users/Test/Desktop/Test.gif";

        [[NSFileManager defaultManager] createFileAtPath:filePath contents:nil attributes:nil];

        self.outputHandle = [NSFileHandle fileHandleForWritingAtPath:filePath];

        NSMutableData *openingData = [[NSMutableData alloc]init];

        // GIF89a header

        const uint8_t gif89aHeader [] = { 0x47, 0x49, 0x46, 0x38, 0x39, 0x61 };

        [openingData appendBytes:gif89aHeader length:sizeof(gif89aHeader)];


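        // Logical screen descriptor: 10x10 pixel canvas, global color table flag set, 4 color entries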
        const uint8_t screenDescriptor [] = { 0x0A, 0x00, 0x0A, 0x00, 0x91, 0x00, 0x00 };

        [openingData appendBytes:screenDescriptor length:sizeof(screenDescriptor)];


        // Global color table

        const uint8_t globalColorTable [] = { 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, 0xFF, 0x00, 0x00, 0x00 };

        [openingData appendBytes:globalColorTable length:sizeof(globalColorTable)];


        // 'Netscape 2.0' - Loop forever

        const uint8_t applicationExtension [] = { 0x21, 0xFF, 0x0B, 0x4E, 0x45, 0x54, 0x53, 0x43, 0x41, 0x50, 0x45, 0x32, 0x2E, 0x30, 0x03, 0x01, 0x00, 0x00, 0x00 };

        [openingData appendBytes:applicationExtension length:sizeof(applicationExtension)];

        [self.outputHandle writeData:openingData];

        for (NSUInteger i = 0; i < 1200; i++)
        {
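            // Graphic control extension: 0.5 second delay (0x0032 hundredths) for this frame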
            const uint8_t graphicsControl [] = { 0x21, 0xF9, 0x04, 0x04, 0x32, 0x00, 0x00, 0x00 };

            NSMutableData *imageData = [[NSMutableData alloc]init];

            [imageData appendBytes:graphicsControl length:sizeof(graphicsControl)];


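            // Image descriptor: 10x10 frame at (0,0), no local color table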
            const uint8_t imageDescriptor [] = { 0x2C, 0x00, 0x00, 0x00, 0x00, 0x0A, 0x00, 0x0A, 0x00, 0x00 };

            [imageData appendBytes:imageDescriptor length:sizeof(imageDescriptor)];


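            // LZW-compressed pixel data: minimum code size 2, one 22-byte data sub-block, then a block terminator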
            const uint8_t image [] = { 0x02, 0x16, 0x8C, 0x2D, 0x99, 0x87, 0x2A, 0x1C, 0xDC, 0x33, 0xA0, 0x02, 0x75, 0xEC, 0x95, 0xFA, 0xA8, 0xDE, 0x60, 0x8C, 0x04, 0x91, 0x4C, 0x01, 0x00 };

            [imageData appendBytes:image length:sizeof(image)];


            [self.outputHandle writeData:imageData];
        }


        NSMutableData *closingData = [[NSMutableData alloc]init];

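        // Comment extension block containing the ASCII text "Hi"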
        const uint8_t appSignature [] = { 0x21, 0xFE, 0x02, 0x48, 0x69, 0x00 };

        [closingData appendBytes:appSignature length:sizeof(appSignature)];


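        // Trailer byte marking the end of the GIF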
        const uint8_t trailer [] = { 0x3B };

        [closingData appendBytes:trailer length:sizeof(trailer)];


        [self.outputHandle writeData:closingData];

        [self.outputHandle closeFile];

        self.outputHandle = nil;

        dispatch_async(dispatch_get_main_queue(),^
        {
           // Get back to main thread and do something with the GIF
        });
    });
}

- (UIImage *)getImage
{
    // Read question's 'Update 1' to see why I'm not using the
    // drawViewHierarchyInRect method
    UIGraphicsBeginImageContextWithOptions(self.containerView.bounds.size, NO, 1.0);
    [self.containerView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapShot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Shaves exported gif size considerably
    NSData *data = UIImageJPEGRepresentation(snapShot, 1.0);

    return [UIImage imageWithData:data];
}
klcjr89
  • Keyword: @autoreleasepool - http://stackoverflow.com/search?q=[objective-c]%40autoreleasepool –  Apr 18 '15 at 21:39
  • Yep, but removing it doesn't help either. – klcjr89 Apr 18 '15 at 21:40
  • Find the correct locations to insert pools - and you can also nest them. –  Apr 18 '15 at 21:41
  • By removing the pools as a test, I would get the error ': ImageIO: CGImageDestinationAddImage image parameter is nil' when the method returned – klcjr89 Apr 18 '15 at 21:44
  • Better search link: http://stackoverflow.com/search?q=[objective-c]CGImage+%40autoreleasepool –  Apr 18 '15 at 21:46
  • @Zero, I did some research, and confirmed that the autoreleasepools should be in the right spot. I have updated my question with new discoveries I've found. – klcjr89 Apr 20 '15 at 04:06
  • 2
    Have you tried the following? For each frame: write the frame image out to an individual image file, create a `CGImageSource` from that file, add the image from that source to the destination, release the source. After you finalize the destination, delete the temporary image files. The idea is that the images are backed by files so they don't have to be in memory all the time. They can be purged and reloaded as necessary. – Ken Thomases Apr 20 '15 at 21:04
  • I agree with Ken Thomases. If you write an image to disk and load it with some APIs (UIImage does it but I'm not so sure about Core Graphics. CGImageSource sounds right) it will automatically be deleted from RAM in low memory situations and re-loaded from the disk later if you need the image again. – Abhi Beckert Apr 21 '15 at 21:10
  • This doesn't work. I've tried it. `CGImageDestinationFinalize` needs all of the images in RAM before it writes out to disk. – klcjr89 Apr 21 '15 at 21:11
  • See my prior revision here of the attempt (under Update 5) to do just that: http://stackoverflow.com/revisions/29723024/31 – klcjr89 Apr 21 '15 at 21:16
  • Hmm, I just spent an embarrassingly long time investigating how to do this efficiently with `ImageIO`. Unfortunately, I have not made any progress :( Have you considered using `AVFoundation` to capture a non-GIF video and using that instead (optionally, preparing GIF images on a server or something)? This is probably not what you are looking for so feel free to ignore/downvote this comment. In case you want to explore this alternative, I just cleaned up and published a side project that does the same. Hope it helps https://github.com/chinmaygarde/CaptureKit – Buzzy Apr 24 '15 at 02:06
  • Thank you for your comment! Right now I'm considering just building the GIF byte per byte myself. – klcjr89 Apr 24 '15 at 02:14
  • @Buzzy I looked at your link, and is there any way of recording the frames in non-real time? By that I mean sped up, so if the proposed GIF animation was to take 5 minutes, is there any way that it won't take that long to record it? I would also do this in a non-visible view offscreen. – klcjr89 Apr 25 '15 at 01:40
  • 1
    I don't think you're going to have much luck with ImageIO. Have you tried [Giraffe](https://github.com/unixpickle/Giraffe)? It's an Objective-C wrapper around the [ANGif library](http://freecode.com/projects/angif), and it writes the output file incrementally (based on my inspection). – rob mayoff Apr 25 '15 at 03:27
  • @robmayoff I will look into this. It doesn't look like they compute a global color table however. – klcjr89 Apr 25 '15 at 03:42
  • You could also see if the compressor in [LICEcap](http://www.cockos.com/licecap/) is usable (but keep in mind it's GPL). Or check out [Gifsicle](https://github.com/kohler/gifsicle), which is also GPL, but it looks like the author might grant other licenses by request. – rob mayoff Apr 25 '15 at 03:49
  • @robmayoff I've found a wealth of color quantization and LZW source code online, but can't seem to manage implementing it. What do you think of my idea for letting ImageIO give me a color palette (under latest update heading in question) – klcjr89 Apr 25 '15 at 03:54
  • @aviatorken89 Yes, both those things are very easy to achieve. Modify the presentation time argument in the `appendPixelBuffer:withPresentationTime:` call to control the timing of the result relative to rate of capture. To capture offscreen view hierarchies, use `[view.layer renderInContext:...]` (on `CALayer`) instead of the `drawViewHierarchyInRect:afterScreenUpdates:` method on `UIView` – Buzzy Apr 25 '15 at 06:16
  • If creating a video file instead of an animated GIF is acceptable, you (aviatorken89) should say so clearly, because creating a 1200 frame video file is much easier than creating a 1200 frame animated GIF. – rob mayoff Apr 25 '15 at 06:43
  • @Buzzy can you provide an example? The sample project you linked to doesn't run in the simulator. – klcjr89 Apr 25 '15 at 15:00
  • @robmayoff I have no problems with a video if the file will be small and can replicate different GIF delay times. Updated my question with this. – klcjr89 Apr 25 '15 at 18:53
  • The demo of the example from the README was actually captured on the simulator. So I am surprised you are not able to run the same. It is an iOS dynamic framework. Maybe it is not being embedded correctly or something? Not sure. In any case, @robmayoff s answer does pretty much what I described and he has done an excellent job annotating each step. You should use that :) – Buzzy Apr 25 '15 at 22:30
  • I am looking for a large GIF. I see you just create a GIF with 1200 frames. I tried to add many custom images in the GIF but I don't know exactly where to add my images. Can you help me? – Gaby Fitcal Aug 26 '15 at 10:57
  • If you guys are interested in a solution that fully fixes these problems by writing decoded GIF89A frames to disk, see my blog post on that subject: http://www.modejong.com/blog/post6_new_animated_gif_decoder_for_avanimator – MoDJ Oct 09 '16 at 19:44

2 Answers


If you set kCGImagePropertyGIFHasGlobalColorMap to NO, then the out-of-memory problem will not happen.
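For example, a minimal sketch of where the property goes when building the GIF with ImageIO (the frames array, delay, and output URL here are placeholders):

NSDictionary *gifProperties = @{(NSString *)kCGImagePropertyGIFDictionary:
                                    @{(NSString *)kCGImagePropertyGIFLoopCount: @0,
                                      (NSString *)kCGImagePropertyGIFHasGlobalColorMap: @NO}};

NSDictionary *frameProperties = @{(NSString *)kCGImagePropertyGIFDictionary:
                                      @{(NSString *)kCGImagePropertyGIFDelayTime: @(0.1)}};

CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)outputURL, kUTTypeGIF, frames.count, NULL);
CGImageDestinationSetProperties(destination, (__bridge CFDictionaryRef)gifProperties);

for (UIImage *frame in frames)
{
    CGImageDestinationAddImage(destination, frame.CGImage, (__bridge CFDictionaryRef)frameProperties);
}

CGImageDestinationFinalize(destination);
CFRelease(destination);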

hsafarya
  • More attention should be paid to this answer as it is the creation of a global color map that requires a huge memory footprint when creating a GIF from many large images. Disabling it will increase GIF size, but it will solve some or most of the memory problems here. – Scott H Aug 03 '15 at 21:19
  • Where do I need to set that? – Gaby Fitcal Aug 25 '15 at 10:11
  • @ScottH - Well, not really. `kCGImagePropertyGIFHasGlobalColorMap` won't do anything in a multi-image GIF. 'Multi-image GIF files require individual properties of each image to be set, which means the `kCGImagePropertyGIFImageColorMap` will have no effect when the source images are not themselves GIF files.' – Sebyddd Sep 04 '15 at 01:46
  • @Sebyddd in my tests with that setting I can create gif from nearly 200 frames. – hsafarya Sep 04 '15 at 05:48
  • I agree with @hsafarya. My testing shows that this setting has a large impact on the memory footprint. My hypothesis is that when constructing a global color map for an animated GIF, the algorithm loads many (perhaps all) of the frames into memory at once to create a common (global) color map for them all. With this setting off, it appears not many frames are loaded at once because each image is used to construct its own color map. – Scott H Sep 04 '15 at 21:02
  • I've had this issue in an app of mine for a long time, I can't believe it but this actually works! I added `kCGImagePropertyGIFHasGlobalColorMap` to the GIF properties and it's stopped crashing! – Tim Johnsen Nov 23 '15 at 02:51
  • if you encoded the individual frames as gifs first, and then combined them, could you get by without setting this property (and increasing the size of the gif) – wfbarksdale Jan 15 '16 at 02:25
  • Ouch, this setting increased my GIF from 9MB to 109MB – ninjaneer Jan 29 '16 at 02:42
  • The GIFs created with this method are orders of magnitude larger than the originals. Not really a great option. – jjxtra Aug 25 '16 at 02:25
  • This doesn't work. I've tried it. `@{(NSString *)kCGImagePropertyGIFDictionary: @{(NSString *)kCGImagePropertyGIFDelayTime: @(delayTime)},(NSString *)kCGImagePropertyColorModel:(NSString *)kCGImagePropertyColorModelRGB,(NSString *)kCGImagePropertyGIFHasGlobalColorMap:@NO,}` – Gami Nilesh Jan 18 '17 at 09:34

You can use AVFoundation to write a video with your images. I've uploaded a complete working test project to this github repository. When you run the test project in the simulator, it will print a file path to the debug console. Open that path in your video player to check the output.

I'll walk through the important parts of the code in this answer.

Start by creating an AVAssetWriter. I'd give it the AVFileTypeAppleM4V file type so that the video works on iOS devices.

AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:self.url fileType:AVFileTypeAppleM4V error:&error];

Set up an output settings dictionary with the video parameters:

- (NSDictionary *)videoOutputSettings {
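    // `size` is the video's pixel size; presumably the frame generator's frameSize in the full project.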
    return @{
             AVVideoCodecKey: AVVideoCodecH264,
             AVVideoWidthKey: @((size_t)size.width),
             AVVideoHeightKey: @((size_t)size.height),
             AVVideoCompressionPropertiesKey: @{
                     AVVideoProfileLevelKey: AVVideoProfileLevelH264Baseline31,
                     AVVideoAverageBitRateKey: @(1200000) }};
}

You can adjust the bit rate to control the size of your video file. I've chosen the codec profile pretty conservatively here (it supports some pretty old devices). You might want to choose a later profile.

Then create an AVAssetWriterInput with media type AVMediaTypeVideo and your output settings.

NSDictionary *outputSettings = [self videoOutputSettings];
AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];

Set up a pixel buffer attribute dictionary:

- (NSDictionary *)pixelBufferAttributes {
    return @{
             (__bridge id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA),
             (__bridge id)kCVPixelBufferCGBitmapContextCompatibilityKey: @YES };
}

You don't have to specify the pixel buffer dimensions here; AVFoundation will get them from the input's output settings. The attributes I've used here are (I believe) optimal for drawing with Core Graphics.

Next, create an AVAssetWriterInputPixelBufferAdaptor for your input using the pixel buffer settings.

AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
    assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
    sourcePixelBufferAttributes:[self pixelBufferAttributes]];

Add the input to the writer and tell the writer to get going:

[writer addInput:input];
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];

Next we'll tell the input how to get video frames. Yes, we can do this after we've told the writer to start writing:

    [input requestMediaDataWhenReadyOnQueue:adaptorQueue usingBlock:^{

This block is going to do everything else we need to do with AVFoundation. The input calls it each time it's ready to accept more data. It might be able to accept multiple frames in a single call, so we'll loop as long as it's ready:

        while (input.readyForMoreMediaData && self.frameGenerator.hasNextFrame) {

I'm using self.frameGenerator to actually draw the frames. I'll show that code later. The frameGenerator decides when the video is over (by returning NO from hasNextFrame). It also knows when each frame should appear on screen:

            CMTime time = self.frameGenerator.nextFramePresentationTime;

To actually draw the frame, we need to get a pixel buffer from the adaptor:

            CVPixelBufferRef buffer = 0;
            CVPixelBufferPoolRef pool = adaptor.pixelBufferPool;
            CVReturn code = CVPixelBufferPoolCreatePixelBuffer(0, pool, &buffer);
            if (code != kCVReturnSuccess) {
                errorBlock([self errorWithFormat:@"could not create pixel buffer; CoreVideo error code %ld", (long)code]);
                [input markAsFinished];
                [writer cancelWriting];
                return;
            } else {

If we couldn't get a pixel buffer, we signal an error and abort everything. If we did get a pixel buffer, we need to wrap a bitmap context around it and ask frameGenerator to draw the next frame in the context:

                CVPixelBufferLockBaseAddress(buffer, 0); {
                    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB(); {
                        CGContextRef gc = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(buffer), CVPixelBufferGetWidth(buffer), CVPixelBufferGetHeight(buffer), 8, CVPixelBufferGetBytesPerRow(buffer), rgb, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); {
                            [self.frameGenerator drawNextFrameInContext:gc];
                        } CGContextRelease(gc);
                    } CGColorSpaceRelease(rgb);

Now we can append the buffer to the video. The adaptor does that:

                    [adaptor appendPixelBuffer:buffer withPresentationTime:time];
                } CVPixelBufferUnlockBaseAddress(buffer, 0);
            } CVPixelBufferRelease(buffer);
        }

The loop above pushes frames through the adaptor until either the input says it's had enough, or until frameGenerator says it's out of frames. If the frameGenerator has more frames, we just return, and the input will call us again when it's ready for more frames:

        if (self.frameGenerator.hasNextFrame) {
            return;
        }

If the frameGenerator is out of frames, we shut down the input:

        [input markAsFinished];

And then we tell the writer to finish. It'll call a completion handler when it's done:

        [writer finishWritingWithCompletionHandler:^{
            if (writer.status == AVAssetWriterStatusFailed) {
                errorBlock(writer.error);
            } else {
                dispatch_async(dispatch_get_main_queue(), doneBlock);
            }
        }];
    }];

By comparison, generating the frames is pretty straightforward. Here's the protocol the generator adopts:

@protocol DqdFrameGenerator <NSObject>

@required

// You should return the same size every time I ask for it.
@property (nonatomic, readonly) CGSize frameSize;

// I'll ask for frames in a loop. On each pass through the loop, I'll start by asking if you have any more frames:
@property (nonatomic, readonly) BOOL hasNextFrame;

// If you say NO, I'll stop asking and end the video.

// If you say YES, I'll ask for the presentation time of the next frame:
@property (nonatomic, readonly) CMTime nextFramePresentationTime;

// Then I'll ask you to draw the next frame into a bitmap graphics context:
- (void)drawNextFrameInContext:(CGContextRef)gc;

// Then I'll go back to the top of the loop.

@end

For my test, I draw a background image, and slowly cover it up with solid red as the video progresses.

@implementation TestFrameGenerator {
    UIImage *baseImage;
    CMTime nextTime;
}

- (instancetype)init {
    if (self = [super init]) {
        baseImage = [UIImage imageNamed:@"baseImage.jpg"];
        _totalFramesCount = 100;
        nextTime = CMTimeMake(0, 30);
    }
    return self;
}

- (CGSize)frameSize {
    return baseImage.size;
}

- (BOOL)hasNextFrame {
    return self.framesEmittedCount < self.totalFramesCount;
}

- (CMTime)nextFramePresentationTime {
    return nextTime;
}

Core Graphics puts the origin in the lower left corner of the bitmap context, but I'm using a UIImage, and UIKit likes to have the origin in the upper left.

- (void)drawNextFrameInContext:(CGContextRef)gc {
    CGContextTranslateCTM(gc, 0, baseImage.size.height);
    CGContextScaleCTM(gc, 1, -1);
    UIGraphicsPushContext(gc); {
        [baseImage drawAtPoint:CGPointZero];

        [[UIColor redColor] setFill];
        UIRectFill(CGRectMake(0, 0, baseImage.size.width, baseImage.size.height * self.framesEmittedCount / self.totalFramesCount));
    } UIGraphicsPopContext();

    ++_framesEmittedCount;

I call a callback that my test program uses to update a progress indicator:

    if (self.frameGeneratedCallback != nil) {
        dispatch_async(dispatch_get_main_queue(), ^{
            self.frameGeneratedCallback();
        });
    }

Finally, to demonstrate variable frame rate, I emit the first half of the frames at 30 frames per second, and the second half at 15 frames per second:

    if (self.framesEmittedCount < self.totalFramesCount / 2) {
        nextTime.value += 1;
    } else {
        nextTime.value += 2;
    }
}

@end
rob mayoff
  • This may seem like a strange question, but can this capture a CALayer animation while it is occurring? Which brings me to another question: how can I record something faster so the result is the same as if the user watched the whole animation? Let's say there is an animation which is supposed to take 5 minutes to fill a path; I definitely don't want the recording to take 5 minutes, but I need to preserve the animation(s) in the final video. – klcjr89 Apr 25 '15 at 22:57
  • The times you return from `nextFramePresentationTime` determine the playback speed. AVFoundation will encode the video as fast as it can (which is determined by the speed of your `drawNextFrameInContext:` method and the difficulty of compressing your frames). My test program (on my Mac Pro) encodes the video in a fraction of a second, but the video plays over several seconds in QuickTime Player. – rob mayoff Apr 25 '15 at 23:06
  • As for capturing a CALayer animation, it's not really designed for that. It's possible to step through a CALayer animation by fiddling with its `CAMediaTiming` properties, but I don't recall the details and I can't research it at the moment. – rob mayoff Apr 25 '15 at 23:07
  • Two subsections of [“Advanced Animation Tricks” in the *Core Animation Programming Guide*](https://developer.apple.com/library/ios/documentation/Cocoa/Conceptual/CoreAnimation_guide/AdvancedAnimationTricks/AdvancedAnimationTricks.html#//apple_ref/doc/uid/TP40004514-CH8-SW1) seem relevant to stepping through a CALayer animation. – rob mayoff Apr 25 '15 at 23:09
  • Does the nature of AVFoundation write the data out incrementally to disk? Otherwise I could end up with the same problem. – klcjr89 Apr 25 '15 at 23:43
  • Yes, it writes the file incrementally. Try adding this after the call to `CVPixelBufferRelease`: `NSLog(@"fileSize = %llu", [[NSFileManager defaultManager] attributesOfItemAtPath:self.url.path error:nil].fileSize);` and you'll see that the file grows as frames are generated. – rob mayoff Apr 26 '15 at 02:18
  • This is pretty impressive! Just to play around I set the frame count to 10,000 and it only took 14 seconds to make the sample video on my rMBP, and what's more important is that the memory doesn't go up hardly at all. The video file size is crazy small too at 9.8MB. – klcjr89 Apr 26 '15 at 04:02
  • 3
    seems unfair that this is accepted answer, since the OP wants to output a gif, not a video – wfbarksdale Jan 15 '16 at 03:15