
Has anyone had any success writing native Xcode/Swift code to implement ReplayGain or a similar function, or know where I can find some example code or a tutorial that does this?

I am trying to convert my Python code to a native macOS application in Xcode. In Python I use the pydub library and apply a gain adjustment through its apply_gain method to normalize my audio files to a specified dBFS, but I have searched high and low for example code showing how to do this in Xcode and I'm coming up nearly empty handed. I'm working with Catalina 10.15.4 and Xcode 11.5.

I found the code below here and modified it to remove the equalizer bands and add the globalGain option. However, I now need to figure out how to analyze the file to determine the peak dB/peak gain and adjust globalGain accordingly, so that each MP3 plays at nearly the same volume without my having to fiddle with the volume all the time.

import AVFoundation

var audioEngine = AVAudioEngine()
var equalizer: AVAudioUnitEQ!
var audioPlayerNode = AVAudioPlayerNode()
var audioFile: AVAudioFile!

// in viewDidLoad():
// Zero bands: only the unit's globalGain is used, no per-band EQ.
equalizer = AVAudioUnitEQ(numberOfBands: 0)
audioEngine.attach(audioPlayerNode)
audioEngine.attach(equalizer)
audioEngine.connect(audioPlayerNode, to: equalizer, format: nil)
audioEngine.connect(equalizer, to: audioEngine.outputNode, format: nil)
equalizer.globalGain = 12
let filePath = "some_music_file.mp3"

do {
    let filePathURL = URL(fileURLWithPath: filePath)
    audioFile = try AVAudioFile(forReading: filePathURL)
    audioEngine.prepare()
    try audioEngine.start()
    audioPlayerNode.scheduleFile(audioFile, at: nil, completionHandler: nil)
    audioPlayerNode.play()
} catch {
    print("An error occurred: \(error)")
}

SouthernYankee65

1 Answer


After many hours of research and digging around, I think I finally have the solution I needed to "normalize" my audio between tracks.

It is quite short, actually. I send an AVAudioPCMBuffer to the method and use the Accelerate framework to do the math, first computing the mean square (power) of the signal and then converting that to a decibel (dB) level.

I set a default volume level, subtract the absolute value of the measured dB from it, and pass that value to the AVAudioUnitEQ's globalGain property.

This appears, at least to my ears, to achieve what I wanted when moving over from Python. I have tested it with numerous MP3s that are noticeably louder or quieter from one track to another, and it seems to be working nicely.

If you plan to use this on really big audio files, I'd recommend putting it in its own thread so it doesn't block. For my application, it seems to be fast enough.
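A minimal sketch of that threading suggestion, assuming a plain GCD global queue is acceptable (the helper name and closure shape are my own, not from the original post):

```swift
import Foundation

// Hypothetical helper: run an expensive analysis closure off the main
// thread and deliver its result back on the main queue.
func runOffMainThread<T>(_ work: @escaping () -> T,
                         completion: @escaping (T) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        let result = work()
        DispatchQueue.main.async { completion(result) }
    }
}
```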

If you know of a better, more efficient way, just comment below.

Swift 5.3

import Accelerate
import AVFoundation

func getDecibles1(buffer: AVAudioPCMBuffer) -> Float {
    // Computes the mean square (power) of each channel in an audio PCM
    // buffer via vDSP_measqv, averages the channels that carry a signal,
    // and converts the result to a decibel (dB) level. Since RMS is the
    // square root of the mean square, 10 * log10(meanSquare) equals
    // 20 * log10(RMS), so no explicit square root is needed.
    let frameLength = vDSP_Length(buffer.frameLength)
    let channels = Int(buffer.format.channelCount)
    var channelsWithSignal = Float(channels)
    var meanSquare: Float = 0
    var meanSquareSumOfChannels: Float = 0
    for channel in 0..<channels {
        guard let channelData = buffer.floatChannelData?[channel] else { return 0 }
        vDSP_measqv(channelData, 1, &meanSquare, frameLength)
        if meanSquare > 0 {
            meanSquareSumOfChannels += meanSquare
        } else {
            channelsWithSignal -= 1
        }
    }
    // Guard against an all-silent buffer (division by zero).
    guard channelsWithSignal > 0 else { return -Float.infinity }
    let dB = 10 * log10(meanSquareSumOfChannels / channelsWithSignal)
    return dB
}
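To connect the measurement back to playback, a rough sketch of the gain step described above could look like this (applyNormalization, targetDB, and the whole-file buffer read are my own illustrative assumptions, not part of the original answer):

```swift
import AVFoundation

// Hypothetical glue code: read the whole file into a PCM buffer, measure
// its level with the function above, and offset the EQ's globalGain so
// playback lands near a chosen target level.
func applyNormalization(to equalizer: AVAudioUnitEQ,
                        file: AVAudioFile,
                        targetDB: Float = -20) throws {
    let format = file.processingFormat
    let frameCount = AVAudioFrameCount(file.length)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: frameCount) else { return }
    try file.read(into: buffer)
    let measuredDB = getDecibles1(buffer: buffer)
    // AVAudioUnitEQ documents globalGain as roughly -96...24 dB; clamp to be safe.
    equalizer.globalGain = max(-96, min(24, targetDB - measuredDB))
}
```

For large files, this read-and-measure pass is the part worth moving off the main thread, as noted above.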
SouthernYankee65
  • The docs for `vDSP_measqv` say that it "Calculates the mean of squares in a single-precision vector." Do you need to take the square root of that to get rms? Also, is adding the channel values together the right way to go? Maybe averaging the channel values is better to get a value that is meaningful for the total output? Asking all of these as questions because my understanding level is probably less than yours. – Carl Smith Apr 24 '21 at 15:51
  • I stopped using this method as it was not good enough in a production environment. I have now resorted to running a command line process through ffmpeg’s loud_norm procedure to set each audio file’s LUFS to the same value. I couldn't find anything native in Xcode/Swift. – SouthernYankee65 Apr 25 '21 at 14:13
  • I am curious about what caused you to conclude that this method "was not good enough in a production environment." I am trying to do this in an app that is for personal use only. I am hoping to avoid going through all the songs in my collection to manually have them to be roughly the same loudness to my ears (not necessarily exact same LUFS value). I listen mostly when driving, so exact precision is useless anyway. Since I don't want to normalize every song in my library, sorting which ones to use the command line ffmpeg approach on would be a nightmare. – Carl Smith Apr 26 '21 at 06:46
  • I realized, after looking at the code more closely, that it is averaging the channel values. Duh! Too late to edit my original comment - sorry. – Carl Smith Apr 26 '21 at 06:59
  • Just to follow up, I have abandoned this quasi on-the-fly normalization method and I now use methods to launch an external process that calls ffprobe to get the audio file details, then run a ffmpeg process to trim silence from the beginning and end of my audio files and a subsequent ffmpeg process to normalize the audio to LUFS (-23). During my gigs I am no longer having to mess with the volume/gain. I set it and leave it. It's the best solution I came up with for my situation (DJ type work). Thanks for your assistance! – SouthernYankee65 Aug 16 '21 at 22:08