Inside my iOS 8.0 app I need to apply custom audio processing to (non-real-time) audio playback. Typically the audio comes from a device-local audio file.
Currently I use an MTAudioProcessingTap on an AVMutableAudioMix and call my processing code from the tap's process callback. In certain cases this processing code may produce more samples than were passed in, and I wonder what the best way to handle this is (think of a time-stretching effect, for example).
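For reference, this is the shape of the process callback as declared in MediaToolbox (MTAudioProcessingTap.h); note that the frame count requested by the host arrives in numberFrames, and the count you actually delivered is reported back through numberFramesOut:

```c
typedef void (*MTAudioProcessingTapProcessCallback)(
    MTAudioProcessingTapRef tap,
    CMItemCount numberFrames,
    MTAudioProcessingTapFlags flags,
    AudioBufferList *bufferListInOut,
    CMItemCount *numberFramesOut,
    MTAudioProcessingTapFlags *flagsOut);
```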
The process callback takes a CMItemCount *numberFramesOut parameter through which you report the number of outgoing frames. For in-place processing, where the number of incoming and outgoing frames is identical, this is no problem. But when my processing generates more samples than it consumes, I need a way to keep playback going until my output buffers are drained.
Is MTAudioProcessingTap the right choice here at all?