
I would like to do this very simple thing: play PCM audio data from memory.

The audio samples will come from sound-synthesis algorithms, pre-loaded sample files or whatever. My question is really about how to play the buffers, not how to fill them with data.

So I'm looking for the best way to re-implement my old, deprecated AudioWrapper (which was based on AudioUnits V1), but I could not find an API in the Apple documentation that fulfills the following:

  • Compatible with 10.5 through 10.7.
  • Available on iOS.
  • Does not rely on a third-party library.
  • Future-proof (for example: not based on Carbon, 64-bit ready...).

I'm considering using OpenAL, but is it really the best option? I've seen negative opinions about it: it might be too complex and overkill for my needs, and might add performance overhead.

At worst, I could have two different implementations of that AudioWrapper, but if possible I'd really like to avoid having one version for each system (iOS, 10.5, 10.6, 10.7...). Also, it will be in C++.

EDIT: I need low latency: the system must respond to user interactions in under 20 ms (so the buffers must be between 128 and 512 samples at 44.1 kHz).

Jem
  • OpenAL adds a bit more overhead, and setup takes longer than using the native audio APIs, so on iOS I would suggest using either AudioQueues or AVFoundation. – lucius Feb 09 '12 at 23:24

2 Answers


AudioQueues are quite common. However, their I/O buffer sizes are large enough that they are not ideal for interactive I/O (e.g. a synth).

For lower latency, try AudioUnits -- the MixerHost sample may be a good starting point.
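For reference, here is a minimal sketch of what a render-callback-driven output unit looks like, assuming OS X 10.6+ (the AudioComponent API; on 10.5 you would need the older FindNextComponent/OpenAComponent calls instead, and on iOS the subtype is kAudioUnitSubType_RemoteIO). Error handling is omitted for brevity:

```c
#include <AudioUnit/AudioUnit.h>
#include <string.h>

/* Called on the audio thread each time the hardware needs more samples. */
static OSStatus render(void *inRefCon,
                       AudioUnitRenderActionFlags *ioActionFlags,
                       const AudioTimeStamp *inTimeStamp,
                       UInt32 inBusNumber,
                       UInt32 inNumberFrames,
                       AudioBufferList *ioData)
{
    /* Fill ioData->mBuffers[i].mData with inNumberFrames frames from your
       synth or preloaded buffers. Silence for the sake of the sketch: */
    for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i)
        memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
    return noErr;
}

void start_output(void)
{
    AudioComponentDescription desc = {0};
    desc.componentType         = kAudioUnitType_Output;
    desc.componentSubType      = kAudioUnitSubType_DefaultOutput; /* RemoteIO on iOS */
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit unit;
    AudioComponentInstanceNew(comp, &unit);

    AURenderCallbackStruct cb = { render, NULL };
    AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));

    AudioUnitInitialize(unit);
    AudioOutputUnitStart(unit);
}
```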

justin
  • Yes I saw this API in the Apple documentation, but did not really dig into it. It seemed a bit too "high level" for my simple needs, but that's not necessarily a bad thing, so why not. Is there a way to get good latency with AudioQueues? I need to work with buffers between 128 and 512 samples. – Jem Feb 04 '12 at 18:24
  • @Jem yup, AQs are a middle ground. for the frequency you want, go with AUs (updated) – justin Feb 04 '12 at 18:53

Not sure about OS X 10.5, but I'm directly using the Audio Units API for low-latency audio analysis and synthesis on OS X 10.6, 10.7, and iOS 3.x thru 5.x. My wrapper file to generalize the API came to only a few hundred lines of plain C, with a few ifdefs.

The latency of Audio Queues was too high for my low latency stuff on iOS, whereas the iOS RemoteIO Audio Unit seems to allow buffers as short as 256 samples (but sometimes only down to 1024 when the display goes off) at a 44100 sample rate.
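On iOS of that era, the preferred I/O buffer duration was requested through the AudioSession C API. A sketch, assuming a 256-frame target (the hardware is free to give you a larger buffer, e.g. when the display is off, so always honor inNumberFrames in the render callback):

```c
#include <AudioToolbox/AudioToolbox.h>

/* iOS only: request ~256-frame I/O buffers at 44.1 kHz (about 5.8 ms). */
void request_small_buffers(void)
{
    AudioSessionInitialize(NULL, NULL, NULL, NULL);

    Float32 duration = 256.0f / 44100.0f;
    AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                            sizeof(duration), &duration);

    AudioSessionSetActive(true);
}
```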

hotpaw2
  • Good news. I quickly realized that the functions I wanted to use were introduced in 10.6 or deprecated since 10.5, and that the AudioUnit API is slightly different between iOS and OS X (for things like 'kAudioUnitSubType_RemoteIO'). But if it's possible to deal easily with those minor differences with a few preprocessor directives, it will do. I guess I'll make a separate implementation for 10.5. But seriously, Apple, this is a mess. – Jem Feb 04 '12 at 22:58