5

I'm applying an AVMutableAudioMix to an asset I've created; the asset generally consists of 3-5 audio tracks (no video). The goal is to add several volume commands throughout the play time, i.e. I'd like to set the volume to 0.1 at 1 second, 0.5 at 2 seconds, then back to 0.1 (or whatever) at 3 seconds. Right now I'm trying to do this with an AVPlayer, but I will later also use it when exporting the composition to a file. The problem is that it only seems to respect the first volume command and ignores all later volume commands. If the first command sets the volume to 0.1, that remains the volume for that track for the rest of the asset. Yet it really looks like you should be able to add any number of these commands, since the "inputParameters" member of AVMutableAudioMix is an NSArray, i.e. a series of AVMutableAudioMixInputParameters. Has anyone figured this out?

Edit: I've partly figured this out. I'm able to add several volume changes throughout a given track, but the timings appear way off and I'm not sure how to fix that. For example, setting the volume to 0.0 at 5 seconds, then 1.0 at 10 seconds, and back to 0.0 at 15 seconds should make the volume switch on and off promptly at those times, but the results are very unpredictable: sounds ramp up and down, and it only sometimes works (with the sudden volume changes you'd expect from setVolume). If anyone has gotten the AudioMix to work, please provide an example.
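For reference, here is a simplified sketch of roughly what I'm trying to do (`audioTrack` and `composition` are placeholders for my actual composition track and asset):

AVMutableAudioMixInputParameters *params = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];

// several discrete volume changes on the same track
[params setVolume:0.1 atTime:CMTimeMakeWithSeconds(1.0, 600)];
[params setVolume:0.5 atTime:CMTimeMakeWithSeconds(2.0, 600)];
[params setVolume:0.1 atTime:CMTimeMakeWithSeconds(3.0, 600)];

AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
audioMix.inputParameters = [NSArray arrayWithObject:params];

// for playback; the same mix would later go on the export session
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:composition];
playerItem.audioMix = audioMix;
AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];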

Jonny
  • 15,955
  • 18
  • 111
  • 232
  • The closest thing to an example I've seen is this Technical Q&A: http://developer.apple.com/library/ios/#qa/qa1730/_index.html but that example only uses one "setVolumeRampFromStartVolume", where I'm using a bunch of setVolume:s. I've actually tried setVolumeRampFromStartVolume too, but the timings get weird if you add a couple. – Jonny May 01 '11 at 10:08
  • I'll leave this open for now, though I "solved" it by not using AVFoundation for the volume changes. I found out how to do it with Audio Toolbox, exporting to temporary files, one per track. Still using AVFoundation to create a composition of all temp files, then export too. – Jonny May 07 '11 at 10:37
  • Found that exporting with `AVAssetExportPresetPassthrough` doesn't work: https://stackoverflow.com/a/56993522/1322703 – Alexander Bekert Jul 11 '19 at 16:23

4 Answers

8

The code I use to change the track volume is:

AVURLAsset *soundTrackAsset = [[AVURLAsset alloc] initWithURL:trackUrl options:nil];
AVMutableAudioMixInputParameters *audioInputParams = [AVMutableAudioMixInputParameters audioMixInputParameters];

[audioInputParams setVolume:0.5 atTime:kCMTimeZero];
[audioInputParams setTrackID:[[[soundTrackAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0] trackID]];
audioMix = [AVMutableAudioMix audioMix];
audioMix.inputParameters = [NSArray arrayWithObject:audioInputParams];

Don't forget to add the audio mix to your AVAssetExportSession:

exportSession.audioMix = audioMix;
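If you want to preview the mix with an AVPlayer before exporting, the same audio mix can be attached to the player item (a minimal sketch; `composition` stands in for your asset):

AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:composition];
playerItem.audioMix = audioMix;
AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
[player play];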

However, I noticed it does not work with all formats, so if you keep having issues with AVFoundation you can use the following function to change the volume level of a stored file. Be aware that this function can be quite slow.

-(void) ScaleAudioFileAmplitude:(NSURL *)theURL: (float) ampScale {
    OSStatus err = noErr;

    ExtAudioFileRef audiofile;
    ExtAudioFileOpenURL((CFURLRef)theURL, &audiofile);
    assert(audiofile);

    // get some info about the file's format.
    AudioStreamBasicDescription fileFormat;
    UInt32 size = sizeof(fileFormat);
    err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_FileDataFormat, &size, &fileFormat);

    // we'll need to know what type of file it is later when we write 
    AudioFileID aFile;
    size = sizeof(aFile);
    err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_AudioFile, &size, &aFile);
    AudioFileTypeID fileType;
    size = sizeof(fileType);
    err = AudioFileGetProperty(aFile, kAudioFilePropertyFileFormat, &size, &fileType);


    // tell the ExtAudioFile API what format we want samples back in
    AudioStreamBasicDescription clientFormat;
    bzero(&clientFormat, sizeof(clientFormat));
    clientFormat.mChannelsPerFrame = fileFormat.mChannelsPerFrame;
    clientFormat.mBytesPerFrame = 4;
    clientFormat.mBytesPerPacket = clientFormat.mBytesPerFrame;
    clientFormat.mFramesPerPacket = 1;
    clientFormat.mBitsPerChannel = 32;
    clientFormat.mFormatID = kAudioFormatLinearPCM;
    clientFormat.mSampleRate = fileFormat.mSampleRate;
    clientFormat.mFormatFlags = kLinearPCMFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved;
    err = ExtAudioFileSetProperty(audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);

    // find out how many frames we need to read
    SInt64 numFrames = 0;
    size = sizeof(numFrames);
    err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_FileLengthFrames, &size, &numFrames);

    // create the buffers for reading in data
    AudioBufferList *bufferList = malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * (clientFormat.mChannelsPerFrame - 1));
    bufferList->mNumberBuffers = clientFormat.mChannelsPerFrame;
    for (int ii=0; ii < bufferList->mNumberBuffers; ++ii) {
        bufferList->mBuffers[ii].mDataByteSize = sizeof(float) * numFrames;
        bufferList->mBuffers[ii].mNumberChannels = 1;
        bufferList->mBuffers[ii].mData = malloc(bufferList->mBuffers[ii].mDataByteSize);
    }

    // read in the data
    UInt32 rFrames = (UInt32)numFrames;
    err = ExtAudioFileRead(audiofile, &rFrames, bufferList);

    // close the file
    err = ExtAudioFileDispose(audiofile);

    // process the audio
    for (int ii=0; ii < bufferList->mNumberBuffers; ++ii) {
        float *fBuf = (float *)bufferList->mBuffers[ii].mData;
        for (int jj=0; jj < rFrames; ++jj) {
            *fBuf = *fBuf * ampScale;
            fBuf++;
        } 
    }

    // open the file for writing
    err = ExtAudioFileCreateWithURL((CFURLRef)theURL, fileType, &fileFormat, NULL, kAudioFileFlags_EraseFile, &audiofile);

    // tell the ExtAudioFile API what format we'll be sending samples in
    err = ExtAudioFileSetProperty(audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);

    // write the data
    err = ExtAudioFileWrite(audiofile, rFrames, bufferList);

    // close the file
    ExtAudioFileDispose(audiofile);

    // destroy the buffers
    for (int ii=0; ii < bufferList->mNumberBuffers; ++ii) {
        free(bufferList->mBuffers[ii].mData);
    }
    free(bufferList);
    bufferList = NULL;

 }

Please also note that you may need to fine-tune the ampScale depending on where your volume value comes from. The system volume ranges from 0 to 1 and can be obtained by calling AudioSessionGetProperty:

Float32 volume;
UInt32 dataSize = sizeof(Float32);
AudioSessionGetProperty (
                         kAudioSessionProperty_CurrentHardwareOutputVolume,
                         &dataSize,
                         &volume
                        );
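A call to the function above might then look like this (a sketch; `fileURL` is a placeholder, and the empty second selector part matches the signature above):

NSURL *fileURL = [NSURL fileURLWithPath:@"/path/to/audio.caf"]; // placeholder path
[self ScaleAudioFileAmplitude:fileURL :0.5f]; // halve the amplitude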
Julio Bailon
  • 3,735
  • 2
  • 33
  • 34
  • Setting volume in AVAssetExportSession has been flakey at best. I can't get the mp3 file I'm trying to change the volume on to work. I used your `-(void) ScaleAudioFileAmplitude:(NSURL *)theURL: (float) ampScale` but still no love ;( – Dex Mar 06 '12 at 22:51
  • What ampScale are you using? It gets noticeable after 4.0f – Julio Bailon Mar 08 '12 at 16:05
  • I'm actually trying to make an mp3 file more quiet by a factor of 0.5f. I can see the samples are getting processed but it seems that the function is editing the mp3 in place and this is giving me a final file result of 4KB. – Dex Mar 08 '12 at 21:19
  • I've narrowed it down and this is the line that is giving an error: `err = ExtAudioFileSetProperty(audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);` although the `clientFormat` structure looks fine. – Dex Mar 09 '12 at 02:25
  • Please suggest here https://stackoverflow.com/questions/54804888/setting-multiple-volumes-to-each-video-tracks-using-audiomixinputparameters-avfo – Anand Gautam Feb 21 '19 at 10:36
2

The Audio Extension Toolbox function doesn't quite work anymore as-is due to API changes. It now requires you to set up an audio session category. When setting the export properties I was getting an error code of '?cat' (which NSError will print out in decimal).

Here is the code that works in iOS 5.1. It is also incredibly slow, several times slower at a glance, and memory intensive, since it appears to load the whole file into memory, which generates memory warnings for 10 MB mp3 files.

-(void) scaleAudioFileAmplitude:(NSURL *)theURL withAmpScale:(float) ampScale
{
    OSStatus err = noErr;

    ExtAudioFileRef audiofile;
    ExtAudioFileOpenURL((CFURLRef)theURL, &audiofile);
    assert(audiofile);

    // get some info about the file's format.
    AudioStreamBasicDescription fileFormat;
    UInt32 size = sizeof(fileFormat);
    err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_FileDataFormat, &size, &fileFormat);

    // we'll need to know what type of file it is later when we write 
    AudioFileID aFile;
    size = sizeof(aFile);
    err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_AudioFile, &size, &aFile);
    AudioFileTypeID fileType;
    size = sizeof(fileType);
    err = AudioFileGetProperty(aFile, kAudioFilePropertyFileFormat, &size, &fileType);


    // tell the ExtAudioFile API what format we want samples back in
    AudioStreamBasicDescription clientFormat;
    bzero(&clientFormat, sizeof(clientFormat));
    clientFormat.mChannelsPerFrame = fileFormat.mChannelsPerFrame;
    clientFormat.mBytesPerFrame = 4;
    clientFormat.mBytesPerPacket = clientFormat.mBytesPerFrame;
    clientFormat.mFramesPerPacket = 1;
    clientFormat.mBitsPerChannel = 32;
    clientFormat.mFormatID = kAudioFormatLinearPCM;
    clientFormat.mSampleRate = fileFormat.mSampleRate;
    clientFormat.mFormatFlags = kLinearPCMFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved;
    err = ExtAudioFileSetProperty(audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);

    // find out how many frames we need to read
    SInt64 numFrames = 0;
    size = sizeof(numFrames);
    err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_FileLengthFrames, &size, &numFrames);

    // create the buffers for reading in data
    AudioBufferList *bufferList = malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * (clientFormat.mChannelsPerFrame - 1));
    bufferList->mNumberBuffers = clientFormat.mChannelsPerFrame;
    //printf("bufferList->mNumberBuffers = %lu \n\n", bufferList->mNumberBuffers);

    for (int ii=0; ii < bufferList->mNumberBuffers; ++ii) {
        bufferList->mBuffers[ii].mDataByteSize = sizeof(float) * numFrames;
        bufferList->mBuffers[ii].mNumberChannels = 1;
        bufferList->mBuffers[ii].mData = malloc(bufferList->mBuffers[ii].mDataByteSize);
    }

    // read in the data
    UInt32 rFrames = (UInt32)numFrames;
    err = ExtAudioFileRead(audiofile, &rFrames, bufferList);

    // close the file
    err = ExtAudioFileDispose(audiofile);

    // process the audio
    for (int ii=0; ii < bufferList->mNumberBuffers; ++ii) {
        float *fBuf = (float *)bufferList->mBuffers[ii].mData;
        for (int jj=0; jj < rFrames; ++jj) {
            *fBuf = *fBuf * ampScale;
            fBuf++;
        } 
    }

    // open the file for writing
    err = ExtAudioFileCreateWithURL((CFURLRef)theURL, fileType, &fileFormat, NULL, kAudioFileFlags_EraseFile, &audiofile);

    NSError *error = NULL;

/*************************** You Need This Now ****************************/
        AVAudioSession *session = [AVAudioSession sharedInstance];
        [session setCategory:AVAudioSessionCategoryAudioProcessing error:&error];
/************************* End You Need This Now **************************/

    // tell the ExtAudioFile API what format we'll be sending samples in
    err = ExtAudioFileSetProperty(audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);

    error = [NSError errorWithDomain:NSOSStatusErrorDomain
                                         code:err
                                     userInfo:nil];
    NSLog(@"Error: %@", [error description]);

    // write the data
    err = ExtAudioFileWrite(audiofile, rFrames, bufferList);

    // close the file
    ExtAudioFileDispose(audiofile);

    // destroy the buffers
    for (int ii=0; ii < bufferList->mNumberBuffers; ++ii) {
        free(bufferList->mBuffers[ii].mData);
    }
    free(bufferList);
    bufferList = NULL;

}
Dex
  • 12,527
  • 15
  • 69
  • 90
0

Thanks for the help provided in this post. I'd just like to add one thing: you should restore the AVAudioSession to its original category afterwards, or you'll end up not playing anything.

AVAudioSession *session = [AVAudioSession sharedInstance];
NSString *originalSessionCategory = [session category];
NSError *error = nil;
[session setCategory:AVAudioSessionCategoryAudioProcessing error:&error];
...
...
// restore the original category
[session setCategory:originalSessionCategory error:&error];
if (error)
    NSLog(@"%@", [error localizedDescription]);

Cheers

alamatula
  • 232
  • 4
  • 9
0

To set different volumes on the mutable composition tracks you can use the code below:

self.audioMix = [AVMutableAudioMix audioMix];
AVMutableAudioMixInputParameters *audioInputParams = [AVMutableAudioMixInputParameters audioMixInputParameters];

[audioInputParams setVolume:0.1 atTime:kCMTimeZero];
audioInputParams.trackID = compositionAudioTrack2.trackID;


AVMutableAudioMixInputParameters *audioInputParams1 = [AVMutableAudioMixInputParameters audioMixInputParameters];
[audioInputParams1 setVolume:0.9 atTime:kCMTimeZero];
audioInputParams1.trackID = compositionAudioTrack1.trackID;

AVMutableAudioMixInputParameters *audioInputParams2 = [AVMutableAudioMixInputParameters audioMixInputParameters];
[audioInputParams2 setVolume:0.3 atTime:kCMTimeZero];
audioInputParams2.trackID = compositionAudioTrack.trackID;


self.audioMix.inputParameters = [NSArray arrayWithObjects:audioInputParams, audioInputParams1, audioInputParams2, nil];
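Then hand the mix to whatever consumes the composition, e.g. a player item for playback or an export session for writing out the file (a sketch; `playerItem` and `exportSession` are your own objects):

playerItem.audioMix = self.audioMix;
// or, when exporting:
exportSession.audioMix = self.audioMix;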
Superdev
  • 562
  • 7
  • 13
  • 1
    This is how it is supposed to work. I remember nightmares of the setVolume being very buggy. In my app I had to process wav packets manually. Hopefully this bug has been fixed in iOS. – Dex Oct 10 '13 at 00:01
  • 1
    Year is 2022 and setVolume and setVolumeRamp are still not working. This is crazy!!! – Dejan Sep 10 '22 at 19:59