22

I just watched the WWDC Video (Session 502 AVAudioEngine in Practice) on AVAudioEngine and am very excited to make an app built on this tech.

I haven't been able to figure out how I might do level monitoring of the microphone input, or a mixer's output.

Can anyone help? To be clear, I'm talking about monitoring the current input signal (and displaying this in the UI), not the input/output volume setting of a channel/track.

I know you can do this with AVAudioRecorder, but AVAudioRecorder is not an AVAudioNode, which AVAudioEngine requires.

Nischal Hada
horseshoe7

6 Answers

24

Install a tap on the main mixer node, set the frame length on each buffer, then read the samples and compute the average magnitude, something like this:

Import the framework at the top of the file:

#import <Accelerate/Accelerate.h>

Add the properties:

@property float averagePowerForChannel0;
@property float averagePowerForChannel1;

Then install the tap (LEVEL_LOWPASS_TRIG must be defined somewhere, e.g. `#define LEVEL_LOWPASS_TRIG 0.3`):

self.mainMixer = [self.engine mainMixerNode];
[self.mainMixer installTapOnBus:0 bufferSize:1024 format:[self.mainMixer outputFormatForBus:0] block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
    [buffer setFrameLength:1024];
    UInt32 inNumberFrames = buffer.frameLength;

    if(buffer.format.channelCount>0)
    {
        Float32* samples = (Float32*)buffer.floatChannelData[0];
        Float32 avgValue = 0;

        vDSP_meamgv((Float32*)samples, 1, &avgValue, inNumberFrames);
        self.averagePowerForChannel0 = (LEVEL_LOWPASS_TRIG*((avgValue==0)?-100:20.0*log10f(avgValue))) + ((1-LEVEL_LOWPASS_TRIG)*self.averagePowerForChannel0) ;
        self.averagePowerForChannel1 = self.averagePowerForChannel0;
    }

    if(buffer.format.channelCount>1)
    {
        Float32* samples = (Float32*)buffer.floatChannelData[1];
        Float32 avgValue = 0;

        vDSP_meamgv((Float32*)samples, 1, &avgValue, inNumberFrames);
        self.averagePowerForChannel1 = (LEVEL_LOWPASS_TRIG*((avgValue==0)?-100:20.0*log10f(avgValue))) + ((1-LEVEL_LOWPASS_TRIG)*self.averagePowerForChannel1) ;
    }
}];

Then read the value wherever you need it:

NSLog(@"===test===%.2f", self.averagePowerForChannel1);

To get peak values, use vDSP_maxmgv instead of vDSP_meamgv.


LEVEL_LOWPASS_TRIG is a simple low-pass filter coefficient between 0.0 and 1.0. At 0.0 you filter out every new value and get no data at all; at 1.0 there is no smoothing and you get too much noise. The higher the value, the more variation you see in the data. A value between 0.10 and 0.30 works well for most applications.
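The update formula above is just a one-pole low-pass filter applied to the dB value. Here is a minimal sketch of the same math in plain Swift (the 0.3 coefficient and the -100 dB silence floor are taken from the snippet above; the function names are mine, for illustration only):

```swift
import Foundation

let LEVEL_LOWPASS_TRIG: Float = 0.3 // smoothing coefficient, 0.0...1.0

// Convert an average sample magnitude (0...1) to dBFS, flooring silence at -100 dB.
func powerInDB(fromAverageMagnitude avg: Float) -> Float {
    return avg == 0 ? -100 : 20.0 * log10f(avg)
}

// One-pole low-pass filter: blend the new reading with the previous smoothed value.
func smoothed(_ newDB: Float, previous: Float) -> Float {
    return LEVEL_LOWPASS_TRIG * newDB + (1 - LEVEL_LOWPASS_TRIG) * previous
}
```

For scale: a full-scale sine wave has an average magnitude of about 0.64, so vDSP_meamgv readings near -4 dB mean "very loud", while a quiet room typically sits somewhere around -60 dB, consistent with the readings reported in the comments.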

Farhad Malekpour
    What is the value (or range) used for LEVEL_LOWPASS_TRIG? – apocolipse Aug 02 '16 at 20:56
  • 6
    To use vDSP_meamgv , do "import Accelerate" to use Apple's high performance math framework. – Josh Nov 03 '16 at 11:05
  • 1
    Can you post a complete working example in Github perhaps? – real 19 Jan 20 '17 at 00:48
  • @apocolipse I did not know what to put either... LEVEL_LOWPASS_TRIG=0.01 worked for me. – Josh Feb 08 '17 at 11:54
  • This is the best option. I did the same thing for Swift, so this ObjC syntax was a lifesaver for me on another app. It can be adjusted for different visual representations for volume: waveform chards, simple volume bars, or volume dependent transparency (a fading microphone icon, and so on...). – Josh Feb 08 '17 at 11:56
  • Hi josh, is swift conversion working for you? if so please share. – SaRaVaNaN DM May 03 '17 at 07:25
  • You save my day :) – Shebin Koshy Apr 12 '18 at 12:09
  • Awesome code. @FarhadMalekpour, would you be able to add more comments to what the code is doing and why? – Mr Rogers Feb 22 '19 at 00:09
  • Hey, so what are these values supposed to mean? I'm getting around -66.0 in a very quiet room, and if I speak it moves to somewhere around -44.0. Are these decibels, based on the 'v = -100' floor? Why are they lower in quiet environments? – omarojo Jul 11 '19 at 06:39
16

Equivalent Swift 3 code for Farhad Malekpour's answer

Import the framework at the top of the file:

import Accelerate

Declare at class scope:

private var audioEngine: AVAudioEngine?
private var averagePowerForChannel0: Float = 0
private var averagePowerForChannel1: Float = 0
let LEVEL_LOWPASS_TRIG: Float32 = 0.30

Use the code below where you need it:

let inputNode = audioEngine!.inputNode // for the microphone level; use `mainMixerNode` for the output level
let recordingFormat: AVAudioFormat = inputNode!.outputFormat(forBus: 0)
inputNode!.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
    guard let strongSelf = self else { return }
    strongSelf.audioMetering(buffer: buffer)
}

The calculations:

private func audioMetering(buffer: AVAudioPCMBuffer) {
    buffer.frameLength = 1024
    let inNumberFrames: UInt = UInt(buffer.frameLength)
    if buffer.format.channelCount > 0 {
        let samples = buffer.floatChannelData![0]
        var avgValue: Float32 = 0
        vDSP_meamgv(samples, 1, &avgValue, inNumberFrames)
        var v: Float = -100
        if avgValue != 0 {
            v = 20.0 * log10f(avgValue)
        }
        self.averagePowerForChannel0 = (self.LEVEL_LOWPASS_TRIG * v) + ((1 - self.LEVEL_LOWPASS_TRIG) * self.averagePowerForChannel0)
        self.averagePowerForChannel1 = self.averagePowerForChannel0
    }

    if buffer.format.channelCount > 1 {
        let samples = buffer.floatChannelData![1]
        var avgValue: Float32 = 0
        vDSP_meamgv(samples, 1, &avgValue, inNumberFrames)
        var v: Float = -100
        if avgValue != 0 {
            v = 20.0 * log10f(avgValue)
        }
        self.averagePowerForChannel1 = (self.LEVEL_LOWPASS_TRIG * v) + ((1 - self.LEVEL_LOWPASS_TRIG) * self.averagePowerForChannel1)
    }
}
Shebin Koshy
2

Swift 5+

I got help from this project.

  1. Download the project above and copy the 'Microphone.swift' class into your project.

  2. Copy and paste the following code into your project:

    import AVFoundation
    
    private var mic = MicrophoneMonitor(numberOfSamples: 1)
    private var timer:Timer!
    
    override func viewDidLoad() {
        super.viewDidLoad()
        timer = Timer.scheduledTimer(timeInterval: 0.1, target: self, selector: #selector(startMonitoring), userInfo: nil, repeats: true)
        timer.fire()
    }
    
    @objc func startMonitoring() {
      print("sound level:", normalizeSoundLevel(level: mic.soundSamples.first!))
    }
    
    private func normalizeSoundLevel(level: Float) -> CGFloat {
        let level = max(0.2, CGFloat(level) + 50) / 2 // between 0.1 and 25
        return CGFloat(level * (300 / 25)) // scaled to max at 300 (our height of our bar)
    }
    

3. Open a beer & celebrate!
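The normalizeSoundLevel mapping above can be pulled out into a standalone function to make the scaling explicit. A sketch in plain Swift using the same constants as the snippet (the -50 dB floor, the divisor 2, and the 300-point bar height); the function name is mine:

```swift
import Foundation

// Map a dB reading (roughly -50...0) onto a bar height in points, maxing out at 300,
// mirroring the constants in normalizeSoundLevel above.
func barHeight(forSoundLevel level: Float) -> CGFloat {
    let normalized = max(0.2, CGFloat(level) + 50) / 2 // roughly 0.1...25
    return normalized * (300 / 25)                     // 0 dB fills the 300-point bar
}
```

A reading of 0 dB maps to the full 300 points, and anything at or below -50 dB is clamped to a small non-zero height, so the bar never disappears entirely.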

Ahmadreza
1

I discovered another solution which is a bit strange but works perfectly well, and much better than a tap. A mixer does not expose an AudioUnit, but if you cast it to an AVAudioIONode you can get at its audioUnit and use iOS's built-in metering facility. Here is how:

To enable or disable metering:

- (void)setMeteringEnabled:(BOOL)enabled;
{
    UInt32 on = (enabled)?1:0;
    AVAudioIONode *node = (AVAudioIONode*)self.engine.mainMixerNode;
    OSStatus err = AudioUnitSetProperty(node.audioUnit, kAudioUnitProperty_MeteringMode, kAudioUnitScope_Output, 0, &on, sizeof(on));
}

To update meters:

- (void)updateMeters;
{
    AVAudioIONode *node = (AVAudioIONode*)self.engine.mainMixerNode;

    AudioUnitParameterValue level;
    AudioUnitGetParameter(node.audioUnit, kMultiChannelMixerParam_PostAveragePower, kAudioUnitScope_Output, 0, &level);

    self.averagePowerForChannel1 = self.averagePowerForChannel0 = level;
    if(self.numberOfChannels > 1)
    {
        AudioUnitGetParameter(node.audioUnit, kMultiChannelMixerParam_PostAveragePower+1, kAudioUnitScope_Output, 0, &level);
        self.averagePowerForChannel1 = level;
    }
}
Farhad Malekpour
1
#define LEVEL_LOWPASS_TRIG .3

#import "AudioRecorder.h"





@implementation AudioRecord


-(id)init {
     self = [super init];
     self.recordEngine = [[AVAudioEngine alloc] init];

     return self;
}


 /**  ----------------------  Snippet Stackoverflow.com not including Audio Level Meter    ---------------------     **/


-(BOOL)recordToFile:(NSString*)filePath {

     NSURL *fileURL = [NSURL fileURLWithPath:filePath];

     const Float64 sampleRate = 44100;

     AudioStreamBasicDescription aacDesc = { 0 };
     aacDesc.mSampleRate = sampleRate;
     aacDesc.mFormatID = kAudioFormatMPEG4AAC; 
     aacDesc.mFramesPerPacket = 1024;
     aacDesc.mChannelsPerFrame = 2;

     ExtAudioFileRef eaf;

     OSStatus err = ExtAudioFileCreateWithURL((__bridge CFURLRef)fileURL, kAudioFileAAC_ADTSType, &aacDesc, NULL, kAudioFileFlags_EraseFile, &eaf);
     assert(noErr == err);

     AVAudioInputNode *input = self.recordEngine.inputNode;
     const AVAudioNodeBus bus = 0;

     AVAudioFormat *micFormat = [input inputFormatForBus:bus];

     err = ExtAudioFileSetProperty(eaf, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), micFormat.streamDescription);
     assert(noErr == err);

     [input installTapOnBus:bus bufferSize:1024 format:micFormat block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
       const AudioBufferList *abl = buffer.audioBufferList;
       OSStatus err = ExtAudioFileWrite(eaf, buffer.frameLength, abl);
       assert(noErr == err);


       /**  ----------------------  Snippet from stackoverflow.com in different context  ---------------------     **/


       UInt32 inNumberFrames = buffer.frameLength;
       if(buffer.format.channelCount>0) {
         Float32* samples = (Float32*)buffer.floatChannelData[0];
         Float32 avgValue = 0;
         // use the peak magnitude here; vDSP_maxv can return a negative sample, which breaks log10f
         vDSP_maxmgv(samples, 1, &avgValue, inNumberFrames);
         self.averagePowerForChannel0 = (LEVEL_LOWPASS_TRIG*((avgValue==0)?-100:20.0*log10f(avgValue))) + ((1-LEVEL_LOWPASS_TRIG)*self.averagePowerForChannel0);
         self.averagePowerForChannel1 = self.averagePowerForChannel0;
       }

       dispatch_async(dispatch_get_main_queue(), ^{

         self.levelIndicator.floatValue=self.averagePowerForChannel0;

       });     


       /**  ---------------------- End of Snippet from stackoverflow.com in different context  ---------------------     **/

     }];

     BOOL startSuccess;
     NSError *error;

     startSuccess = [self.recordEngine startAndReturnError:&error]; 
     return startSuccess;
}



@end
Paul-J
-1
#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/ExtendedAudioFile.h>
#import <CoreAudio/CoreAudio.h>
#import <Accelerate/Accelerate.h>
#import <AppKit/AppKit.h>

@interface AudioRecord : NSObject {

}

@property (nonatomic) AVAudioEngine *recordEngine;


@property float averagePowerForChannel0;
@property float averagePowerForChannel1;
@property float numberOfChannels;
@property NSLevelIndicator * levelIndicator;


-(BOOL)recordToFile:(NSString*)filePath;

@end
Paul-J
    To use, simply call: newAudioRecord = [AudioRecord new]; newAudioRecord.levelIndicator = self.audioLevelIndicator; --- Experimental (and not great) [newAudioRecord recordToFile:fullFilePath_Name]; [newAudioRecord.recordEngine stop]; [newAudioRecord.recordEngine reset]; [newAudioRecord.recordEngine pause]; To resume: [newAudioRecord.recordEngine startAndReturnError:NULL]; – Paul-J Jul 25 '19 at 02:14