
I'm writing some frames to video with AVAssetWriterInputPixelBufferAdaptor, and the behavior w.r.t. time isn't what I'd expect.

If I write just one frame:

 [videoWriter startSessionAtSourceTime:kCMTimeZero];
 [adaptor appendPixelBuffer:pxBuffer withPresentationTime:kCMTimeZero];

this gets me a video of length zero, which is what I expect.

But if I go on to add a second frame:

 // 3000/600 = 5 sec, right?
 CMTime nextFrame = CMTimeMake(3000, 600); 
 [adaptor appendPixelBuffer:pxBuffer withPresentationTime:nextFrame];

I get ten seconds of video, where I'm expecting five.

What's going on here? Does withPresentationTime somehow set both the start of the frame and the duration?

Note that I'm not calling endSessionAtSourceTime, just finishWriting.

Dhaivat Pandya
David Moles
  • Why don’t you call `endSessionAtSourceTime`? I think you have to do that for the export to work correctly (if I remember the code right). – zoul Apr 27 '11 at 18:53
  • 2
    According to the [docs](http://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVAssetWriter_Class/Reference/Reference.html#//apple_ref/doc/uid/TP40009518-CH1-SW6), `endSessionAtSourceTime` is optional: "if you call `finishWriting` without calling this method, the session's effective end time will be the latest end timestamp of the session's samples." But I don't understand how the end timestamp is set -- it feels like Apple forgot a parameter. – David Moles Apr 27 '11 at 19:02
  • Hmm, I guess I just got you off the track. (Won’t hurt to try `endSessionAtSourceTime` anyway, though.) – zoul Apr 27 '11 at 19:48

3 Answers


Try looking at this example and reverse-engineering it to add one frame 5 seconds later...

Here is the sample code link: git@github.com:RudyAramayo/AVAssetWriterInputPixelBufferAdaptorSample.git

Here is the code you need:

- (void)testCompressionSession
{
    CGSize size = CGSizeMake(480, 320);

    NSString *betaCompressionDirectory = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Movie.m4v"];
    NSError *error = nil;

    // Remove any file left over from a previous run.
    unlink([betaCompressionDirectory UTF8String]);

    //----initialize compression engine
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:betaCompressionDirectory]
                                                           fileType:AVFileTypeQuickTimeMovie
                                                              error:&error];
    NSParameterAssert(videoWriter);
    if (error)
        NSLog(@"error = %@", [error localizedDescription]);

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:size.height], AVVideoHeightKey, nil];
    AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];

    NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                                                           [NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, nil];

    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                                                                     sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);

    if ([videoWriter canAddInput:writerInput])
        NSLog(@"I can add this input");
    else
        NSLog(@"I can't add this input");

    [videoWriter addInput:writerInput];

    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    //---
    // Demo/debugging code: write the same image repeatedly as a movie.

    CGImageRef theImage = [[UIImage imageNamed:@"Lotus.png"] CGImage];

    dispatch_queue_t dispatchQueue = dispatch_queue_create("mediaInputQueue", NULL);
    int __block      frame = 0;

    [writerInput requestMediaDataWhenReadyOnQueue:dispatchQueue usingBlock:^{
        while ([writerInput isReadyForMoreMediaData])
        {
            // 120 frames at a timescale of 20 units per second -> about 6 seconds of video.
            if (++frame >= 120)
            {
                [writerInput markAsFinished];
                [videoWriter finishWriting];
                [videoWriter release];
                break;
            }

            CVPixelBufferRef buffer = (CVPixelBufferRef)[self pixelBufferFromCGImage:theImage size:size];
            if (buffer)
            {
                // Frame N is presented at N/20 seconds.
                if (![adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake(frame, 20)])
                    NSLog(@"FAIL");
                else
                    NSLog(@"Success:%d", frame);
                CFRelease(buffer);
            }
        }
    }];

    NSLog(@"outside for loop");
}


- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image size:(CGSize)size
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef)options, &pxbuffer);
    // Alternatively, pull buffers from the adaptor's pool once the session has started:
    // CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &pxbuffer);

    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    // Use the buffer's own bytes-per-row; CoreVideo may pad rows beyond 4 * width.
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace, kCGImageAlphaPremultipliedFirst);
    NSParameterAssert(context);

    // The image is drawn at its native size into the size.width x size.height buffer.
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    // The caller is responsible for CFRelease()ing the returned buffer.
    return pxbuffer;
}
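
The commented-out `CVPixelBufferPoolCreatePixelBuffer` line above hints at a pool-based variant. A minimal sketch of what that could look like, assuming the adaptor is passed in (the `adaptor:` parameter is not part of the original sample) and that the writer session has already started, since `pixelBufferPool` is nil before then:

// Sketch: draw into a buffer vended by the adaptor's pool instead of CVPixelBufferCreate.
// Assumes startWriting / startSessionAtSourceTime: have already been called, because
// adaptor.pixelBufferPool is nil until the session is started.
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
                                   adaptor:(AVAssetWriterInputPixelBufferAdaptor *)adaptor
                                      size:(CGSize)size
{
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                         adaptor.pixelBufferPool,
                                                         &pxbuffer);
    if (status != kCVReturnSuccess || pxbuffer == NULL)
        return NULL;

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace, kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;   // caller still CFReleases this when done
}
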
Orbitus007
  • I am trying this code, but I get Success for the first frame and FAIL for all the following ones. I am trying to capture only two frames per second (0, 0.5, 1, 1.5 seconds) and then append them to the adaptor. Any help? – nr5 Jul 08 '17 at 10:04

Have you tried using this as your first call?

CMTime t = CMTimeMake(0, 600);
[videoWriter startSessionAtSourceTime:t];
[adaptor appendPixelBuffer:pxBuffer withPresentationTime:t];
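
Spelled out for the two frames from the question, a minimal sketch that keeps every call on the same 600-unit timescale (it assumes `videoWriter`, `adaptor`, and `pxBuffer` are set up as in the question; whether the shared timescale actually changes the resulting duration is the open point in the comments below):

CMTime t         = CMTimeMake(0, 600);     // first frame at t = 0, timescale 600
CMTime nextFrame = CMTimeMake(3000, 600);  // second frame 5 seconds later, same timescale

[videoWriter startSessionAtSourceTime:t];
[adaptor appendPixelBuffer:pxBuffer withPresentationTime:t];
[adaptor appendPixelBuffer:pxBuffer withPresentationTime:nextFrame];
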
Steve McFarlin
  • Instead of `kCMTimeZero`? No, that didn't occur to me. Should it make a difference? – David Moles May 11 '11 at 16:18
  • 1
    Yes instead of kCMTimeZero. The reason I suggest this is your timescale is going to be different between the two calls. The kCMTimeZero has a timescale of 0, where as your next call has a timescale of 600. I am not positive this will make a difference as 0/timescale is 0 anyway. I would try it anyway as AVFoundation may use the timescale directly (e.g. to set properties in the header of a MOV file). – Steve McFarlin May 12 '11 at 21:09

According to the documentation for the -[AVAssetWriterInput appendSampleBuffer:] method:

For track types other than audio tracks, to determine the duration of all samples in the output file other than the very last sample that's appended, the difference between the sample buffer's output DTS and the following sample buffer's output DTS will be used. The duration of the last sample is determined as follows:

  1. If a marker sample buffer with kCMSampleBufferAttachmentKey_EndsPreviousSampleDuration is appended following the last media-bearing sample, the difference between the output DTS of the marker sample buffer and the output DTS of the last media-bearing sample will be used.
  2. If the marker sample buffer is not provided and if the output duration of the last media-bearing sample is valid, it will be used.
  3. If the output duration of the last media-bearing sample is not valid, the duration of the second-to-last sample will be used.

So, basically, you are in situation #3:

  • The first sample's duration is 5 s, based on the PTS difference between the first and second samples
  • The second sample's duration is also 5 s, because the duration of the second-to-last sample is reused
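
Putting numbers on those rules for the question's two frames, a minimal sketch of the resulting durations, plus one way to end the movie earlier (this assumes `videoWriter`, `adaptor`, and `pxBuffer` from the question, and that `endSessionAtSourceTime:` caps the session as the AVAssetWriter docs describe; the 3600/600 end time is just an illustrative value):

// Two frames appended at t = 0 and t = 5 s (timescale 600), as in the question.
[videoWriter startSessionAtSourceTime:CMTimeMake(0, 600)];
[adaptor appendPixelBuffer:pxBuffer withPresentationTime:CMTimeMake(0, 600)];
[adaptor appendPixelBuffer:pxBuffer withPresentationTime:CMTimeMake(3000, 600)];

// Rule 3: no marker buffer and no valid output duration on the last sample, so the
// last frame reuses the previous sample's 5 s duration -> a 10 s movie.

// To end the movie somewhere else, declare the session's end time before finishing;
// here the last frame would only last from 5 s to 6 s.
[videoWriter endSessionAtSourceTime:CMTimeMake(3600, 600)];
[videoWriter finishWriting];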
naituw