In my iOS app, I need to save an image as a short video segment. I have this working using AVAssetWriter and AVAssetWriterPixelBufferAdaptor, thanks to some of the great posts on this site, but I've had to fudge the start and end session times, and presentation times, because I don't really understand them.
The following fragment creates a 2-second video, but I've set the various times by trial and error. I'm not sure why it doesn't create a 3-second video, to be honest.
// Start the session
videoWriter.movieFragmentInterval = CMTimeMake(1, 600);
[videoWriter startWriting];
CMTime startTime = CMTimeMake(0, 600);                      // 0.0 s
[videoWriter startSessionAtSourceTime:startTime];

// Append the single frame once the input is ready
while (1) {
    if (![writerInput isReadyForMoreMediaData]) {
        NSLog(@"Not ready for data");
    } else {
        [avAdaptor appendPixelBuffer:pixelBuffer
                withPresentationTime:CMTimeMake(1200, 600)]; // 2.0 s
        break;
    }
}

// Finish the session
[writerInput markAsFinished];
CMTime endTime = CMTimeMake(1800, 600);                     // 3.0 s
[videoWriter endSessionAtSourceTime:endTime];
[videoWriter finishWriting];
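For what it's worth, here's how I've been reading the CMTime values. This little check is just my own sanity test, not part of the app, so I may well be misinterpreting it:

// (Assumes <AVFoundation/AVFoundation.h> is imported, which pulls in CMTime.)
// My (possibly wrong) reading: CMTimeMake(value, timescale) represents
// value / timescale seconds, so with a timescale of 600:
//   CMTimeMake(0, 600)    -> 0.0 s  (startSessionAtSourceTime:)
//   CMTimeMake(1200, 600) -> 2.0 s  (withPresentationTime:)
//   CMTimeMake(1800, 600) -> 3.0 s  (endSessionAtSourceTime:)
CMTime frameTime = CMTimeMake(1200, 600);
NSLog(@"frame at %f s", CMTimeGetSeconds(frameTime));   // logs "frame at 2.000000 s"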
Can anyone explain the various time settings in this fragment, or point me to a document that will help? I've read the Apple docs until I'm cross-eyed, but they assume more knowledge than I currently have, I guess.
TIA: John