
Recently, I wanted to get my hands dirty with Core Audio, so I started working on a simple desktop app that applies effects (e.g. echo) to the microphone data in real time, so that the processed data can then be used in communication apps (e.g. Skype, Zoom, etc.).

To do that, I figured I have to create a virtual microphone, to be able to send the processed (effects applied) data to communication apps. For example, the user would need to select this new virtual microphone device as the Input Device in a Zoom call so that the other users in the call hear her voice with the effects applied.

My main concern is that I need to find a way to "route" the voice data captured from the physical microphone (e.g. the built-in mic) to the virtual microphone. I've spent some time reading the book "Learning Core Audio" by Adamson and Avila, and in Chapter 8 the authors explain how to write an app that a) uses an AUHAL to capture data from the system's default input device and b) then sends the data to the system's default output using an AUGraph. So, following this example, I figured that I also need to create an app that captures the microphone data (which only happens while the app is running).
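To give an idea of the capture side, the AUHAL setup the book describes looks roughly like this (a trimmed-down Swift sketch, not my actual code):

```swift
import AudioToolbox

// Find the AUHAL (hardware abstraction layer) output unit.
var desc = AudioComponentDescription(componentType: kAudioUnitType_Output,
                                     componentSubType: kAudioUnitSubType_HALOutput,
                                     componentManufacturer: kAudioUnitManufacturer_Apple,
                                     componentFlags: 0, componentFlagsMask: 0)
guard let component = AudioComponentFindNext(nil, &desc) else { fatalError("AUHAL not found") }

var inputUnit: AudioUnit?
AudioComponentInstanceNew(component, &inputUnit)

// Enable input on bus 1 (the hardware input side) and disable output on bus 0,
// so this unit only captures from the input device.
var enableIO: UInt32 = 1
var disableIO: UInt32 = 0
AudioUnitSetProperty(inputUnit!, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Input, 1, &enableIO, UInt32(MemoryLayout<UInt32>.size))
AudioUnitSetProperty(inputUnit!, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Output, 0, &disableIO, UInt32(MemoryLayout<UInt32>.size))

// Setting the default input device, the stream format, the input render callback and
// AudioUnitInitialize/AudioOutputUnitStart follow, as in the book's Chapter 8 example.
```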

So, what I've done so far:

  1. I've created the virtual microphone, for which I followed the NullAudio driver example from Apple.
  2. I've created the app that captures the microphone data.

For both of the above "modules" I'm certain that they work as expected independently, since I've tested them in various ways. The only missing piece now is how to "connect" the physical mic with the virtual mic: I need to connect the output of the physical microphone to the input of the virtual one.

So, my questions are:

  • Is this something trivial that can be achieved using the AUGraph approach, as described in the book? Should I just find the correct way to configure the graph in order to achieve this connection between the two devices?
  • The only related thread I found is this, where the author states that the routing is done by

"sending this audio data to the driver via a socket connection. So other apps that request audio from our virtual mic in fact get this audio from a user-space application that listens to the mic at the same time (so it should be active)"

but I'm not quite sure how to even start implementing something like that.

  • The whole process I went through for capturing data from the microphone seems quite long, and I was wondering whether there's a more straightforward way to do it. The book seems to be from 2012, with some corrections made in 2014. Has Core Audio changed dramatically since then, so that this can now be achieved more easily, with just a few lines of code?
nifior

1 Answer


I think you'll get more results by searching for the term "play through" instead of "routing".

The Adamson / Avila book has an ideal play-through example that, unfortunately for you, only works when both input and output are handled by the same device (e.g. the built-in hardware on most Mac laptops and iPhone/iPad devices).

Note that there is another audio device concept called "playthru" (see kAudioDevicePropertyPlayThru and related properties) which seems to be a form of routing internal to a single device. I wish it were a property that let you set a forwarding device, but alas, no. Some informal doco on this: https://lists.apple.com/archives/coreaudio-api/2005/Aug/msg00250.html
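If you want to poke at it anyway, you can at least check whether a device exposes that control at all; something like this untested sketch (and again, this is not the forwarding mechanism you're after):

```swift
import CoreAudio

// Check whether a device has the play-thru control at all.
// `deviceID` is the AudioObjectID of the device you're inspecting.
func hasPlayThruControl(_ deviceID: AudioObjectID) -> Bool {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyPlayThru,
        mScope: kAudioObjectPropertyScopePlayThrough,
        mElement: kAudioObjectPropertyElementMaster)
    return AudioObjectHasProperty(deviceID, &address)
}
```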

I've never tried it, but you should be able to connect input to output on an AUGraph like this. AUGraph is, however, deprecated in favour of AVAudioEngine, which last time I checked did not handle non-default input/output devices well.
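With AVAudioEngine the default-device-to-default-device version is only a few lines, which is also where you hit the device-selection limitation I mentioned (a sketch, not something I've shipped):

```swift
import AVFoundation

let engine = AVAudioEngine()

// Wire the input node straight to the output node. Both nodes are tied to the
// system default devices, which is exactly the limitation mentioned above.
let format = engine.inputNode.inputFormat(forBus: 0)
engine.connect(engine.inputNode, to: engine.outputNode, format: format)

do {
    try engine.start()
} catch {
    print("Failed to start engine: \(error)")
}
```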

I instead manually copy buffers from the input device to the output device via a ring buffer (TPCircularBuffer works well). The devil is in the details, and much of the work is deciding which properties you want and living with their consequences. Some common and conflicting example properties:

  1. minimal lag
  2. minimal dropouts
  3. no time distortion

In my case, if the output is lagging too far behind the input, I brutally dump everything bar 1 or 2 buffers. There is some dated Apple sample code called CAPlayThrough which elegantly speeds up the output stream. You should definitely check it out.
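To make the shape of it concrete, the two render callbacks around the ring buffer look roughly like this. This is a sketch, not my production code: it assumes the AUHAL input unit and the output unit are already configured (as in CAPlayThrough), that both sides share the same interleaved stereo Float32 stream format, and that TPCircularBuffer is visible to Swift (e.g. via a bridging header). The `PlayThroughState` name is just for illustration.

```swift
import AudioToolbox
import Darwin

// Shared state; a pointer to an instance is passed to both callbacks via inRefCon.
final class PlayThroughState {
    let ringBuffer = UnsafeMutablePointer<TPCircularBuffer>.allocate(capacity: 1)
    var inputUnit: AudioUnit?                 // the AUHAL capturing the physical mic
    let scratch: UnsafeMutableRawPointer      // pre-allocated; sized for the largest render

    init(ringBytes: UInt32, maxRenderBytes: Int) {
        ringBuffer.initialize(to: TPCircularBuffer())
        _TPCircularBufferInit(ringBuffer, ringBytes, MemoryLayout<TPCircularBuffer>.size)
        scratch = UnsafeMutableRawPointer.allocate(byteCount: maxRenderBytes, alignment: 16)
    }
}

// Input side: pull the freshly captured frames out of the AUHAL and produce them into
// the ring buffer. Registered via kAudioOutputUnitProperty_SetInputCallback.
let inputCallback: AURenderCallback = { inRefCon, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, _ in
    let state = Unmanaged<PlayThroughState>.fromOpaque(inRefCon).takeUnretainedValue()
    guard let unit = state.inputUnit else { return noErr }

    let byteSize = inNumberFrames * 2 * UInt32(MemoryLayout<Float32>.size)   // stereo assumed
    var bufferList = AudioBufferList(
        mNumberBuffers: 1,
        mBuffers: AudioBuffer(mNumberChannels: 2, mDataByteSize: byteSize, mData: state.scratch))

    let status = AudioUnitRender(unit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList)
    if status == noErr {
        TPCircularBufferProduceBytes(state.ringBuffer, state.scratch, bufferList.mBuffers.mDataByteSize)
    }
    return status
}

// Output side: consume whatever is available, and zero-fill (silence) when the
// ring buffer runs dry. Registered as the output unit's render callback.
let outputCallback: AURenderCallback = { inRefCon, _, _, _, _, ioData in
    let state = Unmanaged<PlayThroughState>.fromOpaque(inRefCon).takeUnretainedValue()
    guard let ioData = ioData,
          let dst = UnsafeMutableAudioBufferListPointer(ioData)[0].mData else { return noErr }
    let wanted = UnsafeMutableAudioBufferListPointer(ioData)[0].mDataByteSize

    var available: UInt32 = 0
    if let src = TPCircularBufferTail(state.ringBuffer, &available), available > 0 {
        let toCopy = min(wanted, available)
        memcpy(dst, src, Int(toCopy))
        TPCircularBufferConsume(state.ringBuffer, toCopy)
        if toCopy < wanted { memset(dst.advanced(by: Int(toCopy)), 0, Int(wanted - toCopy)) }
    } else {
        memset(dst, 0, Int(wanted))
    }
    return noErr
}
```

This is where the lag/dropout trade-offs above live: how big the ring buffer is, and what you do when it overfills or runs dry.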

And if you find a simpler way, please tell me!

Update
I found a simpler way:

  1. create an AVCaptureSession that captures from your mic
  2. add an AVCaptureAudioPreviewOutput that references your virtual device

When routing from microphone to headphones, it sounded like it had a few hundred milliseconds' lag, but if AVCaptureAudioPreviewOutput and your virtual device handle timestamps properly, that lag may not matter.
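A minimal Swift sketch of that setup (the UID string is a placeholder for whatever your virtual driver publishes, and microphone permission is assumed to be granted):

```swift
import AVFoundation

let session = AVCaptureSession()

// 1. Capture from the default microphone.
guard let mic = AVCaptureDevice.default(for: .audio),
      let micInput = try? AVCaptureDeviceInput(device: mic),
      session.canAddInput(micInput) else { fatalError("no usable microphone input") }
session.addInput(micInput)

// 2. Play the captured audio out through the virtual device, selected by its UID.
let preview = AVCaptureAudioPreviewOutput()
preview.outputDeviceUniqueID = "MyVirtualMic_UID"   // placeholder: the UID your driver publishes
preview.volume = 1.0
guard session.canAddOutput(preview) else { fatalError("cannot add preview output") }
session.addOutput(preview)

session.startRunning()
```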

Rhythmic Fistman
  • Hi, thank you for your answer! I will check the playthrough property more carefully, thanks! The `CAPlayThrough` example is close to what I'm currently doing in my app. I'm using the input AUHAL to capture data from the default mic and writing it to my ring buffer inside the input render callback I set for the unit. I know that the captured data is correct, because when I create an `AUGraph` with the default output as the only node of the graph and replace this unit's `ioData` with the data from my ring buffer, I'm hearing the captured data. – nifior Jan 09 '21 at 17:40
  • (Continuing) Ideally, I want to replace the `AUGraph` node's default output with my virtual mic device and then replace its `ioData` with the data from my ring buffer. But the more I think about it, the more I believe that this is not the way to go (?). What should be the scope and bus for this unit's callback? If I set a render callback on the input scope of the input bus, then the callback is never called. Maybe all this can work in combination with the playthrough property you mentioned, or I'm missing something very critical in this whole setup. Sorry if this got confusing...! – nifior Jan 09 '21 at 17:50
  • Hi - this answer is a broad overview of the play-through landscape. If you still have all the questions in this comment, can you split them out into several new questions? P.S. I was trying to say `kAudioDevicePropertyPlayThru` is NOT what you want. Also check out the `AVCaptureAudioPreviewOutput` update. It's a very simple way of doing playthrough if you're not so worried about latency. – Rhythmic Fistman Jan 21 '21 at 16:08