I am studying audio unit development. First, I want to build a macOS app that simply fetches audio from the microphone and sends it out to the speaker so we can hear it.
import AVFoundation

let engine = AVAudioEngine()

// Wire the main mixer to the output node using the output hardware's format.
let hardwareFormat = engine.outputNode.outputFormat(forBus: 0)
engine.connect(engine.mainMixerNode, to: engine.outputNode, format: hardwareFormat)

do {
    engine.prepare()
    try engine.start()
    print("engine start success")
} catch {
    fatalError("engine start error: \(error)")
}
I create an AVAudioEngine object and connect the main mixer node to the output node. The engine starts successfully, but I can't hear any sound. So I guessed that maybe the inputNode was not connected to the main mixer node, and added the connection:
let hardwareFormat = engine.outputNode.outputFormat(forBus: 0)
engine.connect(engine.mainMixerNode, to: engine.outputNode, format: hardwareFormat)

// Route the microphone input into the mixer, using the mixer's input format.
engine.connect(engine.inputNode, to: engine.mainMixerNode, format: engine.mainMixerNode.inputFormat(forBus: 0))
It crashes and reports: required condition is false: IsFormatSampleRateAndChannelCountValid(hwFormat)
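As a diagnostic, here is a minimal sketch that defers the connection until microphone access is actually granted. AVCaptureDevice.requestAccess(for: .audio) is the standard macOS prompt API; connecting with the input node's own hardware format (rather than the mixer's input format) is my substitution for illustration, not the original code:

import AVFoundation

// Sketch: only connect inputNode once mic access is granted and the
// input hardware format is valid.
func connectInputWhenAuthorized(_ engine: AVAudioEngine) {
    AVCaptureDevice.requestAccess(for: .audio) { granted in
        guard granted else {
            print("microphone access denied")
            return
        }
        DispatchQueue.main.async {
            let inputFormat = engine.inputNode.inputFormat(forBus: 0)
            // A zero sample rate or channel count means the engine cannot
            // see the input device (denied access or missing entitlement),
            // and connecting would trip IsFormatSampleRateAndChannelCountValid.
            guard inputFormat.sampleRate > 0, inputFormat.channelCount > 0 else {
                print("invalid input hardware format: \(inputFormat)")
                return
            }
            engine.connect(engine.inputNode, to: engine.mainMixerNode, format: inputFormat)
        }
    }
}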
The Info.plist already includes the Privacy - Microphone Usage Description key (NSMicrophoneUsageDescription).
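For reference, a sketch of that Info.plist entry; the string is whatever message you want shown in the permission prompt:

<key>NSMicrophoneUsageDescription</key>
<string>This app passes microphone audio through to the speaker.</string>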
I found the solution: adding the Hardened Runtime capability Audio Input. Without that entitlement, macOS denies the app microphone access, so the inputNode's hardware format comes back with a zero sample rate and channel count, and the connect call fails the IsFormatSampleRateAndChannelCountValid check above.
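Concretely, enabling Audio Input under Signing & Capabilities adds the com.apple.security.device.audio-input entitlement. A sketch of the resulting .entitlements file, assuming App Sandbox is also enabled (the default for new Xcode projects):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <!-- Required so a sandboxed/hardened app can open audio input devices. -->
    <key>com.apple.security.device.audio-input</key>
    <true/>
</dict>
</plist>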