
I'm building an app that involves playing songs from the user's music library while applying an equalization (EQ) effect. I've only used AudioUnits to generate sound before, so I'm having a bit of trouble.

My current plan is to use AVAssetReader to get the samples, and though I'm a bit fuzzy on that, my question here is with regards to the correct AudioUnit design pattern to use from Apple's documentation: https://developer.apple.com/library/ios/documentation/MusicAudio/Conceptual/AudioUnitHostingGuide_iOS/ConstructingAudioUnitApps/ConstructingAudioUnitApps.html#//apple_ref/doc/uid/TP40009492-CH16-SW1.
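For reference, here's roughly what I have in mind for the reader side (just a sketch in Swift; I'm assuming the item comes from an MPMediaPickerController and is a plain local file, since assetURL is nil for DRM-protected or cloud-only items):

    import AVFoundation
    import MediaPlayer

    // Sketch: pull interleaved 16-bit PCM from a library item.
    func makeReader(for item: MPMediaItem) throws -> (AVAssetReader, AVAssetReaderTrackOutput) {
        // assetURL is nil for DRM-protected or cloud-only items.
        let asset = AVURLAsset(url: item.assetURL!)
        let track = asset.tracks(withMediaType: .audio).first!

        // Ask for linear PCM so a render callback can consume it directly.
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatLinearPCM,
            AVSampleRateKey: 44_100,
            AVNumberOfChannelsKey: 2,
            AVLinearPCMBitDepthKey: 16,
            AVLinearPCMIsFloatKey: false,
            AVLinearPCMIsBigEndianKey: false,
            AVLinearPCMIsNonInterleaved: false
        ]
        let output = AVAssetReaderTrackOutput(track: track, outputSettings: settings)

        let reader = try AVAssetReader(asset: asset)
        reader.add(output)
        _ = reader.startReading()   // returns false on failure; check it in real code
        return (reader, output)
    }

From there I'd loop on copyNextSampleBuffer() and stash the decoded PCM somewhere the audio unit can pull from.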

My guess is that a render callback is needed to apply my EQ effect (I was thinking kAudioUnitSubType_ParametricEQ), so that leaves either the "I/O with a Render Callback Function" pattern or the "Output-Only with a Render Callback Function" pattern. Since I'm reading data from the music library (via AVAssetReader, as sketched above) rather than capturing it from the microphone, which of these two patterns is the better fit?

  • This thread also proved helpful: http://stackoverflow.com/questions/12264799/why-is-audio-coming-up-garbled-when-using-avassetreader-with-audio-queue – Rogare Feb 20 '14 at 01:26

1 Answer


I think you would need to use the "Output-Only with a Render Callback Function" pattern. The callback function would be responsible for reading/decoding the audio data and applying the EQ effect; see the sketch below.
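Something along these lines (a minimal sketch; startOutputUnit is my own name, error handling and the actual sample source are omitted, and you'd also need an active AVAudioSession with the playback category before starting the unit):

    import AudioToolbox

    // The system calls this whenever the output unit needs more frames.
    // Real code would copy EQ'd samples from a ring buffer into ioData;
    // this placeholder just writes silence.
    let renderCallback: AURenderCallback = { _, _, _, _, _, ioData in
        guard let buffers = UnsafeMutableAudioBufferListPointer(ioData) else { return noErr }
        for buffer in buffers {
            buffer.mData?.initializeMemory(as: UInt8.self, repeating: 0,
                                           count: Int(buffer.mDataByteSize))
        }
        return noErr
    }

    func startOutputUnit() -> AudioUnit? {
        // Describe and instantiate the RemoteIO output unit.
        var desc = AudioComponentDescription(
            componentType: kAudioUnitType_Output,
            componentSubType: kAudioUnitSubType_RemoteIO,
            componentManufacturer: kAudioUnitManufacturer_Apple,
            componentFlags: 0, componentFlagsMask: 0)
        guard let component = AudioComponentFindNext(nil, &desc) else { return nil }
        var unit: AudioUnit?
        AudioComponentInstanceNew(component, &unit)
        guard let output = unit else { return nil }

        // Output-only pattern: attach the callback to the input scope
        // of the output element (bus 0).
        var callback = AURenderCallbackStruct(inputProc: renderCallback,
                                              inputProcRefCon: nil)
        AudioUnitSetProperty(output, kAudioUnitProperty_SetRenderCallback,
                             kAudioUnitScope_Input, 0, &callback,
                             UInt32(MemoryLayout<AURenderCallbackStruct>.size))

        AudioUnitInitialize(output)
        AudioOutputUnitStart(output)
        return output
    }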

By the way, I don't know if this might be useful in any way, but Apple already ships EQ audio units on iOS (including the kAudioUnitSubType_ParametricEQ you mentioned), so you could insert one of those instead of writing the filter yourself.
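If you go that route, a common setup is an AUGraph where your render callback feeds the EQ node and the EQ node feeds RemoteIO, so the filter math is done for you. A rough sketch (again, error checking omitted, and the parameter values are arbitrary):

    import AudioToolbox

    // Sketch: render callback -> ParametricEQ -> RemoteIO output.
    // `sourceCallback` is assumed to supply the decoded PCM
    // (e.g. from AVAssetReader).
    func makeGraph(sourceCallback: AURenderCallback) -> AUGraph? {
        var graph: AUGraph?
        NewAUGraph(&graph)
        guard let g = graph else { return nil }

        var eqDesc = AudioComponentDescription(
            componentType: kAudioUnitType_Effect,
            componentSubType: kAudioUnitSubType_ParametricEQ,
            componentManufacturer: kAudioUnitManufacturer_Apple,
            componentFlags: 0, componentFlagsMask: 0)
        var ioDesc = AudioComponentDescription(
            componentType: kAudioUnitType_Output,
            componentSubType: kAudioUnitSubType_RemoteIO,
            componentManufacturer: kAudioUnitManufacturer_Apple,
            componentFlags: 0, componentFlagsMask: 0)

        var eqNode = AUNode()
        var ioNode = AUNode()
        AUGraphAddNode(g, &eqDesc, &eqNode)
        AUGraphAddNode(g, &ioDesc, &ioNode)
        AUGraphOpen(g)

        // EQ output -> RemoteIO input; your callback feeds the EQ's input.
        AUGraphConnectNodeInput(g, eqNode, 0, ioNode, 0)
        var callback = AURenderCallbackStruct(inputProc: sourceCallback,
                                              inputProcRefCon: nil)
        AUGraphSetNodeInputCallback(g, eqNode, 0, &callback)

        // Dial in the EQ parameters on the unit itself.
        var eqUnit: AudioUnit?
        AUGraphNodeInfo(g, eqNode, nil, &eqUnit)
        if let eq = eqUnit {
            AudioUnitSetParameter(eq, kParametricEQParam_CenterFreq, kAudioUnitScope_Global, 0, 2_000, 0)
            AudioUnitSetParameter(eq, kParametricEQParam_Q, kAudioUnitScope_Global, 0, 1.0, 0)
            AudioUnitSetParameter(eq, kParametricEQParam_Gain, kAudioUnitScope_Global, 0, 6.0, 0)
        }

        AUGraphInitialize(g)
        AUGraphStart(g)
        return g
    }

You'd also want to set kAudioUnitProperty_StreamFormat on the EQ's input bus to match the PCM coming out of your reader.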
