
If you want to create two versions of the same video input, one filtered and one untouched, are there drawbacks to using two AVAssetWriters simultaneously?

According to this SO post, it's not possible (at least in 2011) to use AVCaptureVideoDataOutput together with AVCaptureMovieFileOutput, so hopefully using two AVAssetWriters together solves the problem. Just would like to know if there are non-obvious gotchas to be aware of.

Crashalot

1 Answer


We've used two AVAssetWriters without issue. There are no real gotchas I can think of, but a few considerations:

  • Obviously, the older the hardware, the more it will struggle (we're using iPhone 6 and up with no issues at all).
  • The size of the output file affects performance, so for the fastest compilation, consider smaller resolutions.
  • It's unclear whether you're compositing live or post-processing. If you're post-processing, you shouldn't have any issues (other than it being slightly slower), but if you're writing live, you might see dropped buffers when performance suffers.

From my experience trying this, I can't see any reason not to give the solution a go; it should work fine.
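To make the shape of this concrete, here is a minimal sketch of one AVCaptureVideoDataOutput delegate feeding two writers: one gets the sample buffer untouched, the other is rendered through a Core Image filter. This is not the answerer's actual code — the class, the `CISepiaTone` filter, and names like `rawInput`/`filteredAdaptor` are illustrative, and session setup, `startWriting`/`startSession`, and error handling are omitted for brevity.

```swift
import AVFoundation
import CoreImage

// Sketch: one capture callback feeding two AVAssetWriters —
// one raw copy, one filtered copy. Illustrative, not production code.
final class DualWriter: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let rawWriter: AVAssetWriter
    let rawInput: AVAssetWriterInput
    let filteredWriter: AVAssetWriter
    let filteredInput: AVAssetWriterInput
    let filteredAdaptor: AVAssetWriterInputPixelBufferAdaptor
    let ciContext = CIContext()
    let filter = CIFilter(name: "CISepiaTone")!   // any CIFilter; sepia as a stand-in

    init(rawURL: URL, filteredURL: URL, width: Int, height: Int) throws {
        let settings: [String: Any] = [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: width,
            AVVideoHeightKey: height
        ]
        rawWriter = try AVAssetWriter(outputURL: rawURL, fileType: .mov)
        rawInput = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
        rawInput.expectsMediaDataInRealTime = true   // required for live capture
        rawWriter.add(rawInput)

        filteredWriter = try AVAssetWriter(outputURL: filteredURL, fileType: .mov)
        filteredInput = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
        filteredInput.expectsMediaDataInRealTime = true
        filteredAdaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: filteredInput, sourcePixelBufferAttributes: nil)
        filteredWriter.add(filteredInput)
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)

        // Untouched copy: append the sample buffer directly.
        if rawInput.isReadyForMoreMediaData {
            rawInput.append(sampleBuffer)
        }

        // Filtered copy: render through Core Image into a fresh pixel buffer.
        // (The adaptor's pool only exists once the writer has started writing.)
        guard filteredInput.isReadyForMoreMediaData,
              let srcPixels = CMSampleBufferGetImageBuffer(sampleBuffer),
              let pool = filteredAdaptor.pixelBufferPool else { return }
        filter.setValue(CIImage(cvPixelBuffer: srcPixels), forKey: kCIInputImageKey)
        var dstPixels: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(nil, pool, &dstPixels)
        if let dst = dstPixels, let image = filter.outputImage {
            ciContext.render(image, to: dst)
            filteredAdaptor.append(dst, withPresentationTime: time)
        }
    }
}
```

The point of the `isReadyForMoreMediaData` checks is the live-capture concern above: if either writer falls behind, you skip that buffer for that writer rather than block the capture queue.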

Tim Bull
  • Cool, thanks! How did you match the output video to the video preview? In other words, if you use 1920x1080 for the capture resolution on a 5S, it shows a crisp 320x568 preview using AVLayerVideoGravityResizeAspectFill, but when scaling the video down, the final version is not as sharp. Is your output as sharp as the preview? Thanks! – Crashalot Feb 08 '16 at 21:14
  • BTW Mixbit is an awesome app! – Crashalot Feb 08 '16 at 21:39
  • I'd look into what you're passing as the outputSettings for your AVAssetWriterInput. That's where you control the quality that's going to be written out. The way in which the buffer is displayed via AVCaptureVideoPreviewLayer and what your write to disk should be two separate things. Make sure you're happy with the unfiltered output you're getting before playing around with filters on the CMSampleBuffer, that way you'll know if it's your manipulations degrading quality rather than the write pipeline. – Tim Bull Feb 08 '16 at 22:00
  • Thanks! Yup, understood the preview layer and the output are different things, and therein lies the problem: matching the output to the preview. Unfortunately, it's hard to mimic the sharpness of AVLayerVideoGravityResizeAspectFill in the preview layer. Are you using high resolution for capture/preview and able to produce video as crisp as the preview? – Crashalot Feb 08 '16 at 23:08
  • Check what settings you're using with the AVAssetWriterInput. We're using this: let videoSettings: [String : AnyObject] = [ AVVideoCodecKey : AVVideoCodecH264, AVVideoWidthKey : captureSize.width, AVVideoHeightKey : captureSize.height ] I can't say I've noticed any difference in quality between the capture and the compiled output (it might be there, but it's never caught my attention). – Tim Bull Feb 11 '16 at 01:08
  • Thanks! Right now, we're using AVAssetExportSession and the quality is noticeably worse; hence the questions about AVAssetWriter to see if it's a viable alternative. Did you ever try AVAssetExportSession, or have you always used AVAssetWriter? Thanks again. – Crashalot Feb 11 '16 at 17:53
  • Put another way, the goal is to mimic the sharpness of the preview layer, to somehow reproduce what AVLayerVideoGravityResizeAspectFill does when it scales the video to fit the preview. Full q here: http://stackoverflow.com/questions/35261603/simulate-avlayervideogravityresizeaspectfill-crop-and-center-video-to-mimic-pre – Crashalot Feb 12 '16 at 04:32