
I have an assetWriterAudioInput being fed CMSampleBufferRefs while recording live video on iOS 4.1+. What I want to accomplish is processing the audio samples in real time before handing them over to the assetWriter, i.e. saving a video while mutating the audio coming from the microphone. Any ideas on how to do that?

Rafael Nobre

1 Answer


To my surprise, there was no need to create a new CMSampleBufferRef with the processed signal. Using the samples exposed as in the question Reading audio samples via AVAssetReader, and processing them in place, the audio works. There are two caveats, though:

1) The buffer size is very small, around 1024 samples per block, so I don't see how one could create spatial/echo effects in real time without access to samples further away.

2) I believe the CMSampleBufferRef timing information is very strict, so no time stretching is allowed; the processing has to be done in place, preserving the sample count.

Restriction #2 is not an issue for me, and #1 is not too cumbersome, as what I'm mostly after is pitch shifting, and that worked out very well using the Dirac LE library.
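The in-place approach can be sketched roughly as below. This is a minimal sketch, assuming the buffer holds 16-bit signed PCM; the function name and the gain effect are illustrative (any effect that preserves the sample count, such as pitch shifting via Dirac LE, fits the same pattern):

```c
#include <stddef.h>
#include <stdint.h>

/* Mutate 16-bit PCM samples in place. Here a simple gain is applied;
 * the key constraint from the answer is that the sample count and the
 * buffer's timing stay untouched. */
static void process_samples_in_place(int16_t *samples, size_t count, float gain)
{
    for (size_t i = 0; i < count; i++) {
        float s = samples[i] * gain;
        /* Clamp to avoid wrap-around distortion on overflow. */
        if (s > INT16_MAX) s = INT16_MAX;
        if (s < INT16_MIN) s = INT16_MIN;
        samples[i] = (int16_t)s;
    }
}

/* On iOS, the raw pointer would come from the capture callback's
 * sample buffer, roughly like this (sketch, error handling omitted):
 *
 *   CMBlockBufferRef block = CMSampleBufferGetDataBuffer(sampleBuffer);
 *   size_t length = 0;
 *   char *data = NULL;
 *   CMBlockBufferGetDataPointer(block, 0, NULL, &length, &data);
 *   process_samples_in_place((int16_t *)data, length / sizeof(int16_t), 1.5f);
 *   [assetWriterAudioInput appendSampleBuffer:sampleBuffer];
 */
```

Because the samples are mutated inside the buffer the writer already owns, the same CMSampleBufferRef can then be appended to the assetWriterAudioInput unchanged.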
