I'm trying to use AVCaptureSession to capture video from the camera, and then I would like to use AVAssetWriter to write the results to a file (specifically, to use multiple AVAssetWriters to write the capture out as chunked video files, but we don't need to complicate this question with that). However, I'm having trouble figuring out where the data actually needs to be passed to the AVAssetWriter. In the Apple Developer documentation I've only seen AVCaptureSession data being passed to an AVCaptureFileOutput, but maybe I'm just missing something. Can an AVAssetWriter be used as an output of the capture session? A relevant example or bit of code (while not necessary) would be appreciated. Thank you!
2 Answers
Take a look at http://www.gdcl.co.uk/2013/02/20/iOS-Video-Encoding.html. It shows how to connect the capture output to the asset writer, and then extract the data from the asset writer for streaming.
What's your goal, exactly? Because what you're asking for (using an `AVAssetWriter` as an output for an `AVCaptureSession`) isn't possible.

Basically, an `AVCaptureSession` object has inputs (e.g., a camera, represented by some `AVCaptureInput` subclass) and outputs (in the form of `AVCaptureOutput`s). An `AVAssetWriter` is not an `AVCaptureOutput` subclass, so there is no way to use it directly from an `AVCaptureSession`.

If you want to use an `AVAssetWriter`, you'll have to write the data out using an `AVCaptureFileOutput` instance, read it back with an `AVAssetReader`, modify your data somehow, and then output it via an `AVAssetWriter`.

One final thing to keep in mind: `AVAssetReader` is documented as not guaranteeing real-time operation.
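For illustration, here is a rough Objective-C sketch of that pipeline: a clip already recorded by an `AVCaptureMovieFileOutput` is read back with an `AVAssetReader` and re-encoded with an `AVAssetWriter`. Only the video track is handled, error handling is trimmed, and the function name and encoding settings are placeholders rather than anything from this answer.

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreVideo/CoreVideo.h>

// Read a recorded movie back with AVAssetReader and re-encode it with
// AVAssetWriter. Video only; audio, error handling, and progress reporting
// are omitted. Assumes the source file's timeline starts at zero, as a
// freshly recorded movie's does.
static void ReencodeMovie(NSURL *inputURL, NSURL *outputURL)
{
    AVAsset *asset = [AVAsset assetWithURL:inputURL];
    AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];

    // Reader: decode the recorded file into uncompressed sample buffers.
    AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:NULL];
    AVAssetReaderTrackOutput *readerOutput =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
            outputSettings:@{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) }];
    [reader addOutput:readerOutput];

    // Writer: re-encode to H.264 in a new QuickTime file.
    AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL
                                                     fileType:AVFileTypeQuickTimeMovie
                                                        error:NULL];
    AVAssetWriterInput *writerInput =
        [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                           outputSettings:@{ AVVideoCodecKey  : AVVideoCodecH264,
                                                             AVVideoWidthKey  : @1280,
                                                             AVVideoHeightKey : @720 }];
    [writer addInput:writerInput];

    [reader startReading];
    [writer startWriting];
    [writer startSessionAtSourceTime:kCMTimeZero];

    dispatch_queue_t queue = dispatch_queue_create("reencode.video", DISPATCH_QUEUE_SERIAL);
    [writerInput requestMediaDataWhenReadyOnQueue:queue usingBlock:^{
        while (writerInput.isReadyForMoreMediaData) {
            CMSampleBufferRef buffer = [readerOutput copyNextSampleBuffer];
            if (buffer == NULL) {
                // No more samples: close the input and finalize the file.
                [writerInput markAsFinished];
                [writer finishWritingWithCompletionHandler:^{ /* hand the file off here */ }];
                break;
            }
            [writerInput appendSampleBuffer:buffer];
            CFRelease(buffer);
        }
    }];
}
```

Because the reader runs as fast as it can decode, this step happens after recording rather than live, which is the real-time caveat noted above.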

- Hmm, interesting. I was trying to go off what I found in the answer [here](http://stackoverflow.com/questions/13851481/http-live-streaming-server-on-iphone) and the upvoted comments [here](http://stackoverflow.com/questions/3444791/streaming-video-from-an-iphone) (among others), which suggest streaming can be achieved by creating two AVAssetWriters and using them one at a time. – golmschenk Apr 09 '13 at 03:31
- This seems to be, based on my searching, the most common way that streaming live camera video FROM an iPhone is achieved. Now that you've mentioned this, I'm more confused as to why this is the method being suggested... – golmschenk Apr 09 '13 at 03:33
- You don't need to use an `AVAssetWriter` to break video up into 10s chunks. You can do that with `AVCaptureMovieFileOutput`; set the `maxRecordedDuration` property to 10s, and re-start recording (to a new file) in `-captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:` to get the next 10s (see the sketch after these comments). – zadr Apr 09 '13 at 03:40
- Yep, that's what I was just about to try. But you do get to earn the upvote from me as well for following up on the comments. Thanks! – golmschenk Apr 09 '13 at 03:44
- I'm still interested to know why the others were going with the dual AVAssetWriters, but that would be a question to ask them specifically. – golmschenk Apr 09 '13 at 03:46
- My best guess is that people are adding filters or editing the video in some other fashion, but yeah, you'd have to ask them. – zadr Apr 09 '13 at 03:47
- @golmschenk According to Apple's docs, zadr's answer is wrong and won't work. Apple has several big alerts stating that this will lose frames during the changeover, and that *only* OS X supports instantaneous changeover. If the question were OS X instead of iOS, yes it would work (according to Apple). – Adam May 29 '13 at 11:50
- @Adam: My answer might not be ideal, but that isn't the same thing as being wrong. – zadr May 30 '13 at 17:40
- @Adam to expand a bit: AVFoundation, as a framework, does not output video in a format that supports what @golmschenk was trying to accomplish (that is, using the iPhone to stream video in real time). Basically, important bits of metadata needed for decoding the video are written at the *end* of the file, even with `AVAssetWriterInput`'s `expectsMediaDataInRealTime` property set. Anything that does otherwise is playing dangerously with assumptions about how the video file will be written. – zadr May 30 '13 at 17:42
- @golmschenk The reason for the dual AVAssetWriters is to break the video output into separate files without dropping any frames. The final goal is to stream video over the Internet from an iPhone by sending those chunks to a server or peer. AVFoundation seems to have way too many classes and is extremely confusing for me... I mean, why is an AVAssetWriter not an AVCaptureOutput subclass? :/ – sudo Aug 28 '14 at 21:26
- @golmschenk Isn't another approach to use AVCaptureVideoDataOutput wired to an AVAssetWriter, as described in https://www.objc.io/issues/23-video/capturing-video/? Then you're not reliant on AVAssetReader and AVCaptureFileOutput (see the second sketch below). – Crashalot Feb 06 '16 at 23:32
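Here is a minimal sketch of the chunked-recording approach zadr describes in the comments: cap each file at 10 seconds via `maxRecordedDuration` and start a new file from the recording delegate callback. The `ChunkedRecorder` class, the chunk naming, and the temporary-directory URLs are assumptions for illustration, and (per the later comments) frames can still be dropped across the changeover on iOS.

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>

// Record the camera into consecutive 10-second files by restarting the movie
// file output from its delegate callback. Assumes the capture session and its
// camera input are already configured elsewhere.
@interface ChunkedRecorder : NSObject <AVCaptureFileOutputRecordingDelegate>
@property (nonatomic, strong) AVCaptureMovieFileOutput *movieOutput;
@property (nonatomic) NSUInteger chunkIndex;
@end

@implementation ChunkedRecorder

- (void)startNextChunk
{
    // Cap each file at 10 seconds of recorded media.
    self.movieOutput.maxRecordedDuration = CMTimeMakeWithSeconds(10.0, 600);

    NSString *name = [NSString stringWithFormat:@"chunk-%lu.mov", (unsigned long)self.chunkIndex];
    self.chunkIndex = self.chunkIndex + 1;
    NSURL *url = [NSURL fileURLWithPath:
        [NSTemporaryDirectory() stringByAppendingPathComponent:name]];
    [self.movieOutput startRecordingToOutputFileURL:url recordingDelegate:self];
}

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error
{
    // The previous chunk is complete (reaching maxRecordedDuration surfaces as
    // an error here): hand outputFileURL off for upload, then immediately begin
    // the next chunk. On iOS, frames can be dropped across this restart.
    [self startNextChunk];
}

@end
```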
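And here is a compressed sketch of the approach in the last comment: an `AVCaptureVideoDataOutput` delivers sample buffers that are appended to an `AVAssetWriterInput` directly, with no intermediate movie file. Session setup, audio, threading details, teardown, and the `WriterPipeline` class and its property names are assumptions for illustration, not code from the linked article.

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>

// Append sample buffers from an AVCaptureVideoDataOutput straight into an
// AVAssetWriterInput, with no intermediate movie file. Video only; audio,
// rotation handling, and backpressure are omitted.
@interface WriterPipeline : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
@property (nonatomic, strong) AVAssetWriter *writer;
@property (nonatomic, strong) AVAssetWriterInput *writerInput;
@property (nonatomic) BOOL sessionStarted;
@end

@implementation WriterPipeline

- (void)attachToSession:(AVCaptureSession *)session outputURL:(NSURL *)url
{
    // Capture side: a real AVCaptureOutput subclass that vends CMSampleBuffers.
    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    [videoOutput setSampleBufferDelegate:self
                                   queue:dispatch_queue_create("capture.video", DISPATCH_QUEUE_SERIAL)];
    if ([session canAddOutput:videoOutput]) {
        [session addOutput:videoOutput];
    }

    // Writer side: an H.264 writer input fed in real time.
    self.writer = [AVAssetWriter assetWriterWithURL:url
                                           fileType:AVFileTypeQuickTimeMovie
                                              error:NULL];
    self.writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                          outputSettings:@{ AVVideoCodecKey  : AVVideoCodecH264,
                                                                            AVVideoWidthKey  : @1280,
                                                                            AVVideoHeightKey : @720 }];
    self.writerInput.expectsMediaDataInRealTime = YES;
    [self.writer addInput:self.writerInput];
    [self.writer startWriting];
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Start the writer's timeline at the timestamp of the first buffer.
    if (!self.sessionStarted) {
        [self.writer startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
        self.sessionStarted = YES;
    }
    if (self.writerInput.isReadyForMoreMediaData) {
        [self.writerInput appendSampleBuffer:sampleBuffer];
    }
    // Call -markAsFinished and -finishWritingWithCompletionHandler: when done.
}

@end
```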