
I'm running a SIP audio streaming app on iOS 6.1.3, on an iPad 2 and a new iPad.

I start my app on my iPad (nothing plugged in).
Audio works.
I plug in the headphones.
The app crashes with `malloc: error for object 0x....: pointer being freed was not allocated` or EXC_BAD_ACCESS.

Alternatively:

I start my app on my iPad (with the headphones plugged in).
Audio comes out of the headphones.
I unplug the headphones.
The app crashes with `malloc: error for object 0x....: pointer being freed was not allocated` or EXC_BAD_ACCESS.

The app code uses the AudioUnit API, based on the http://code.google.com/p/ios-coreaudio-example/ sample code (see below).

I use a kAudioSessionProperty_AudioRouteChange callback to get change awareness. So there are three callbacks from the OS sound manager:
1) Process recorded mic samples
2) Provide samples for the speaker
3) Inform audio HW presence

After lots of tests, my feeling is that the tricky code is the part that performs the mic capture. Most of the time after the plug/unplug action, the recording callback is called a few more times before the RouteChange callback would run; this later causes a 'segmentation fault', and the RouteChange callback is never called at all. Being more specific, I think the AudioUnitRender function causes the bad memory access, while no exception is thrown.

My feeling is that the non-atomic recording callback code races with the OS's updating of the structures that describe the sound devices. The less atomic the recording callback is, the more likely the OS hardware update and the recording callback are to run concurrently.

I modified my code to leave the recording callback as thin as possible, but my feeling is that the high processing load brought by the other threads of my app keeps feeding the concurrency races described above. So the malloc/free errors arise in other parts of the code as a consequence of the AudioUnitRender bad access.
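For reference, this is the shape of the "thin" callback I mean. It is only a sketch: the preallocated buffer list and the ringBufferWrite() hand-off are hypothetical placeholders, not code from my app:

static OSStatus thinRecordingCallback(void *inRefCon,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp *inTimeStamp,
    UInt32 inBusNumber,
    UInt32 inNumberFrames,
    AudioBufferList *ioData) {

    // gPreallocatedList is a hypothetical AudioBufferList set up once,
    // outside the render thread, so no malloc/free happens here
    OSStatus status = AudioUnitRender([iosAudio audioUnit],
        ioActionFlags,
        inTimeStamp,
        inBusNumber,
        inNumberFrames,
        &gPreallocatedList);
    if (status != noErr) {
        // bail out instead of touching buffers that may be stale mid-reroute
        return status;
    }

    // hand the samples to a worker thread through a (hypothetical)
    // lock-free ring buffer; all heavy processing happens off this thread
    ringBufferWrite(gPreallocatedList.mBuffers[0].mData,
        gPreallocatedList.mBuffers[0].mDataByteSize);
    return noErr;
}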

I tried to reduce recording callback latency by:

UInt32 numFrames = 256;
UInt32 dataSize = sizeof(numFrames);

AudioUnitSetProperty(audioUnit,
    kAudioUnitProperty_MaximumFramesPerSlice,
    kAudioUnitScope_Global,
    0,
    &numFrames,
    dataSize);
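
(For what it's worth, the values actually in force can be read back to confirm the preference took effect; a sketch using the same audioUnit:)

UInt32 actualFrames = 0;
UInt32 size = sizeof(actualFrames);
AudioUnitGetProperty(audioUnit,
    kAudioUnitProperty_MaximumFramesPerSlice,
    kAudioUnitScope_Global,
    0,
    &actualFrames,
    &size);

Float32 hwDuration = 0;
size = sizeof(hwDuration);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration,
    &size, &hwDuration);
printf("MaximumFramesPerSlice %u, HW IO buffer %f s\n",
    (unsigned int)actualFrames, hwDuration);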

and I tried to move the problematic processing off the audio thread, onto the main queue:

dispatch_async(dispatch_get_main_queue(), ^{
    // ... problematic processing here ...
});

Does anybody have a tip or a solution for this? In order to reproduce the error, here is my audio session code:

//
//  IosAudioController.m
//  Aruts
//
//  Created by Simon Epskamp on 10/11/10.
//  Copyright 2010 __MyCompanyName__. All rights reserved.
//

#import "IosAudioController.h"
#import <AudioToolbox/AudioToolbox.h>

#define kOutputBus 0
#define kInputBus 1

IosAudioController* iosAudio;

void checkStatus(int status) {
    if (status) {
        printf("Status not 0! %d\n", status);
        // exit(1);
    }
}

/**
 * This callback is called when new audio data from the microphone is available.
 */
static OSStatus recordingCallback(void *inRefCon, 
    AudioUnitRenderActionFlags *ioActionFlags, 
    const AudioTimeStamp *inTimeStamp, 
    UInt32 inBusNumber, 
    UInt32 inNumberFrames, 
    AudioBufferList *ioData) {

    // Because of the way our audio format (setup below) is chosen:
    // we only need 1 buffer, since it is mono
    // Samples are 16 bits = 2 bytes.
    // 1 frame includes only 1 sample

    AudioBuffer buffer;

    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2;
    buffer.mData = malloc( inNumberFrames * 2 );
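    // NB: this malloc (and the free() further down) runs on the real-time
    // audio thread on every callback; a buffer preallocated outside the
    // callback is generally safer.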

    // Put buffer in a AudioBufferList
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    NSLog(@"Recording Callback 1 0x%x ? 0x%x",buffer.mData, 
        bufferList.mBuffers[0].mData);

    // Then:
    // Obtain recorded samples

    OSStatus status;
    status = AudioUnitRender([iosAudio audioUnit],
        ioActionFlags, 
        inTimeStamp,
        inBusNumber,
        inNumberFrames,
        &bufferList);
    checkStatus(status);

    // Now, we have the samples we just read sitting in buffers in bufferList
    // Process the new data
    [iosAudio processAudio:&bufferList];

    NSLog(@"Recording Callback 2 0x%x ? 0x%x",buffer.mData, 
        bufferList.mBuffers[0].mData);

    // release the malloc'ed data in the buffer we created earlier
    free(bufferList.mBuffers[0].mData);

    return noErr;
}

/**
 * This callback is called when the audioUnit needs new data to play through the
 * speakers. If you don't have any, just don't write anything in the buffers
 */
static OSStatus playbackCallback(void *inRefCon, 
    AudioUnitRenderActionFlags *ioActionFlags, 
    const AudioTimeStamp *inTimeStamp, 
    UInt32 inBusNumber, 
    UInt32 inNumberFrames, 
    AudioBufferList *ioData) {
        // Notes: ioData contains buffers (may be more than one!)
        // Fill them up as much as you can.
        // Remember to set the size value in each 
        // buffer to match how much data is in the buffer.

    for (int i=0; i < ioData->mNumberBuffers; i++) {
        // in practice we will only ever have 1 buffer, since audio format is mono
        AudioBuffer buffer = ioData->mBuffers[i];

        // NSLog(@"  Buffer %d has %d channels and wants %d bytes of data.", i, 
            buffer.mNumberChannels, buffer.mDataByteSize);

        // copy temporary buffer data to output buffer
        UInt32 size = min(buffer.mDataByteSize,
            [iosAudio tempBuffer].mDataByteSize);

        // dont copy more data then we have, or then fits
        memcpy(buffer.mData, [iosAudio tempBuffer].mData, size);
        // indicate how much data we wrote in the buffer
        buffer.mDataByteSize = size;

        // uncomment to hear random noise
        /*
         * UInt16 *frameBuffer = buffer.mData;
         * for (int j = 0; j < inNumberFrames; j++) {
         *     frameBuffer[j] = rand();
         * }
         */
    }

    return noErr;
}

@implementation IosAudioController
@synthesize audioUnit, tempBuffer;

void propListener(void *inClientData,
    AudioSessionPropertyID inID,
    UInt32 inDataSize,
    const void *inData) {

    if (inID == kAudioSessionProperty_AudioRouteChange) {

        CFStringRef newRoute;
        UInt32 size = sizeof(CFStringRef);

        AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &size, &newRoute);

        if (newRoute) {
            CFIndex length = CFStringGetLength(newRoute);
            CFIndex maxSize = CFStringGetMaximumSizeForEncoding(length,
                kCFStringEncodingUTF8);

            char *buffer = (char *)malloc(maxSize);
            CFStringGetCString(newRoute, buffer, maxSize,
                kCFStringEncodingUTF8);

            //CFShow(newRoute);
            printf("New route is %s\n",buffer);

            if (CFStringCompare(newRoute, CFSTR("HeadsetInOut"), NULL) == 
                kCFCompareEqualTo) // headset plugged in
            {
                printf("Headset\n");
            } else {
                printf("Another device\n");

                UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
                AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
                    sizeof (audioRouteOverride),&audioRouteOverride);
            }
            printf("New route is %s\n",buffer);
            free(buffer);
        }
        newRoute = nil;
    } 
}

/**
 * Initialize the audioUnit and allocate our own temporary buffer.
 * The temporary buffer will hold the latest data coming in from the microphone,
 * and will be copied to the output when this is requested.
 */
- (id) init {
    self = [super init];
    OSStatus status;

    // Initialize and configure the audio session
    AudioSessionInitialize(NULL, NULL, NULL, self);

    UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
    AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, 
        sizeof(audioCategory), &audioCategory);
    AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, 
        propListener, self);

    Float32 preferredBufferSize = .020;
    AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, 
        sizeof(preferredBufferSize), &preferredBufferSize);

    AudioSessionSetActive(true);

    // Describe audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = 
        kAudioUnitSubType_VoiceProcessingIO/*kAudioUnitSubType_RemoteIO*/;
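    // VoiceProcessingIO adds echo cancellation and gain control on top
    // of the plain RemoteIO unit; the rest of this setup is the same
    // for both subtypes.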
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Get component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Get audio units
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);
    checkStatus(status);

    // Enable IO for recording
    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit,
        kAudioOutputUnitProperty_EnableIO, 
        kAudioUnitScope_Input, 
        kInputBus,
        &flag, 
        sizeof(flag));
    checkStatus(status);

    // Enable IO for playback
    flag = 1;
    status = AudioUnitSetProperty(audioUnit, 
        kAudioOutputUnitProperty_EnableIO, 
        kAudioUnitScope_Output, 
        kOutputBus,
        &flag, 
        sizeof(flag));

    checkStatus(status);

    // Describe format
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = 8000.00;
    //audioFormat.mSampleRate = 44100.00;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = 
        kAudioFormatFlagsCanonical/* kAudioFormatFlagIsSignedInteger | 
        kAudioFormatFlagIsPacked*/;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel = 16;
    audioFormat.mBytesPerPacket = 2;
    audioFormat.mBytesPerFrame = 2;

    // Apply format
    status = AudioUnitSetProperty(audioUnit, 
        kAudioUnitProperty_StreamFormat, 
        kAudioUnitScope_Output, 
        kInputBus, 
        &audioFormat, 
        sizeof(audioFormat));

    checkStatus(status);
    status = AudioUnitSetProperty(audioUnit, 
        kAudioUnitProperty_StreamFormat, 
        kAudioUnitScope_Input, 
        kOutputBus, 
        &audioFormat, 
        sizeof(audioFormat));

    checkStatus(status);


    // Set input callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = recordingCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(audioUnit,
        kAudioOutputUnitProperty_SetInputCallback, 
        kAudioUnitScope_Global, 
        kInputBus, 
        &callbackStruct, 
        sizeof(callbackStruct));

    checkStatus(status);
    // Set output callback
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(audioUnit,
        kAudioUnitProperty_SetRenderCallback, 
        kAudioUnitScope_Global, 
        kOutputBus,
        &callbackStruct, 
        sizeof(callbackStruct));

    checkStatus(status);

    // Disable buffer allocation for the recorder (optional - do this if we want to 
    // pass in our own)

    flag = 0;
    status = AudioUnitSetProperty(audioUnit, 
        kAudioUnitProperty_ShouldAllocateBuffer,
        kAudioUnitScope_Output, 
        kInputBus,
        &flag, 
        sizeof(flag)); 


    flag = 0;
    status = AudioUnitSetProperty(audioUnit,
        kAudioUnitProperty_ShouldAllocateBuffer,
        kAudioUnitScope_Output,
        kOutputBus,
        &flag,
        sizeof(flag));

    // Allocate our own buffers (1 channel, 16 bits per sample, thus 16 bits per
    // frame, thus 2 bytes per frame).
    // In practice the buffers used contain 512 frames;
    // if this changes it will be fixed in processAudio.
    tempBuffer.mNumberChannels = 1;
    tempBuffer.mDataByteSize = 512 * 2;
    tempBuffer.mData = malloc( 512 * 2 );

    // Initialise
    status = AudioUnitInitialize(audioUnit);
    checkStatus(status);

    return self;
}

/**
 * Start the audioUnit. This means data will be provided from
 * the microphone, and requested for feeding to the speakers, by
 * use of the provided callbacks.
 */
- (void) start {
    OSStatus status = AudioOutputUnitStart(audioUnit);
    checkStatus(status);
}

/**
 * Stop the audioUnit
 */
- (void) stop {
    OSStatus status = AudioOutputUnitStop(audioUnit);
    checkStatus(status);
}

/**
 * Change this function to decide what is done with incoming
 * audio data from the microphone.
 * Right now we copy it to our own temporary buffer.
 */
- (void) processAudio: (AudioBufferList*) bufferList {
    AudioBuffer sourceBuffer = bufferList->mBuffers[0];

    // fix tempBuffer size if it's the wrong size
    if (tempBuffer.mDataByteSize != sourceBuffer.mDataByteSize) {
        free(tempBuffer.mData);
        tempBuffer.mDataByteSize = sourceBuffer.mDataByteSize;
        tempBuffer.mData = malloc(sourceBuffer.mDataByteSize);
    }

    // copy incoming audio data to temporary buffer
    memcpy(tempBuffer.mData, bufferList->mBuffers[0].mData, 
        bufferList->mBuffers[0].mDataByteSize);
    usleep(1000000); // <- added to reproduce the error: makes the concurrency more likely

}

/**
 * Clean up.
 */
- (void) dealloc {
    AudioUnitUninitialize(audioUnit);
    free(tempBuffer.mData);
    [super dealloc]; // must come last, after our own cleanup
}

@end
    Did you try adding a breakpoint at `malloc_error_break` - it should give you the pointer that's being freed twice. – maroux May 07 '13 at 08:31
  • Are you leaking `char *buffer = (char *)malloc(maxSize);`? You don't need any of that code btw - `CFStringRef` is a toll-free bridge to `NSString`, so you can simply typecast `newRoute` to a `NSString*` and use `NSString` methods (see the sketch after these comments). – maroux May 07 '13 at 08:35
  • I forgot to include in the test code I provided the fixes to the issues that @Mar0ux highlights, but in my app they were already fixed (`free(buffer);` and `newRoute = nil;`). The error is not always _malloc: error for object 0x....: pointer being freed was not allocated_; other times it is an EXC_BAD_ACCESS in a memcpy. – Angel Martin May 08 '13 at 06:56
  • I have also tried removing the `kAudioOutputUnitProperty_SetInputCallback` callback, keeping just the `kAudioUnitProperty_SetRenderCallback` callback and moving the recording-callback code to the end of the playback-callback code. Moreover, following the comment _// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)_, I removed the `kAudioUnitProperty_ShouldAllocateBuffer` property for `kInputBus`, with the same result. – Angel Martin May 09 '13 at 10:56
  • I made a deep change in my SIP app: I removed the ffmpeg MPEG-4 video encoding to save a considerable amount of processing load. However, the problem remains. Any tip/clue/solution? – Angel Martin May 13 '13 at 11:06
  • I have also explored `@synchronized(iosAudio)` to rule out "potential" concurrency with a "hypothetical" buffer flush inside the AudioUnit, with the same result (crash). I mean the possibility of the iOS audio thread burst-calling the playing/recording callbacks. – Angel Martin May 13 '13 at 14:59
  • I saw this a couple of years ago, but I forget the details. It wasn't anything exotic -- just an uninitialized field or some such. – Hot Licks May 13 '13 at 17:32
  • I'd be pleased if you @HotLicks could provide me a bit of extra information or details about that. I have already tried lots of things without success. – Angel Martin May 22 '13 at 06:51
  • I think it revolved around the interface you use to query the audio config and select your input -- there were a couple of things being done (in the code I didn't originally write) that were not following the API spec. But like I said, it's been a couple of years. – Hot Licks May 22 '13 at 11:36
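
For reference, a sketch of Mar0ux's toll-free-bridging suggestion from the comments above (pre-ARC, matching the rest of the code; `HeadsetInOut` is the same route name the question already checks for):

// CFStringRef is toll-free bridged to NSString, so the manual
// CFStringGetCString/malloc/free conversion can be replaced with a cast:
NSString *routeName = (NSString *)newRoute;
NSLog(@"New route is %@", routeName);

if ([routeName isEqualToString:@"HeadsetInOut"]) {
    // headset plugged in
}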

1 Answer


According to my tests, the line that triggers the SEGV error is ultimately

AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
    sizeof(audioRouteOverride), &audioRouteOverride);

Changing the properties of an AudioUnit chain mid-flight is always tricky, but if you stop the AudioUnit before rerouting and start it again afterwards, it finishes using up all the buffers it has stored and then carries on with the new parameters.

Would that be acceptable, or do you need less of a gap between the change of route and the restart of the recording?

What I did was:

void propListener(void *inClientData,
    AudioSessionPropertyID inID,
    UInt32 inDataSize,
    const void *inData) {

    [iosAudio stop];
    // ...

    [iosAudio start];
}

No more crashes on my iPhone 5 (your mileage may vary with different hardware).

The most logical explanation I have for this behavior, somewhat supported by these tests, is that the render pipe is asynchronous. If you take forever to manipulate your buffers, they just stay queued. But if you change the settings of the AudioUnit, you trigger a mass reset of the render queue with unknown side effects. The trouble is that these changes are synchronous, so they retroactively affect all the asynchronous calls waiting patiently for their turn.

If you don't care about the missed samples, you can do something like:

static BOOL isStopped = NO;
static OSStatus recordingCallback(void *inRefCon, //...
{
  if(isStopped) {
    NSLog(@"Stopped, ignoring");
    return noErr;
  }
  // ...
}

static OSStatus playbackCallback(void *inRefCon, //...
{
  if(isStopped) {
    NSLog(@"Stopped, ignoring");
    return noErr;
  }
  // ...
}

// ...

/**
 * Start the audioUnit. This means data will be provided from
 * the microphone, and requested for feeding to the speakers, by
 * use of the provided callbacks.
 */
- (void) start {
    OSStatus status = AudioOutputUnitStart(_audioUnit);
    checkStatus(status);

    isStopped = NO;
}

/**
 * Stop the audioUnit
 */
- (void) stop {

    isStopped = YES;

    OSStatus status = AudioOutputUnitStop(_audioUnit);
    checkStatus(status);
}

// ...
krug
  • Thanks @krug, your proposal fixes the issue in this code. However, in my app the recording and playing callbacks are called 3-4 times after the plug/unplug activity and before the `propListener` callback is called. Inside these recording and playing callbacks, some external structure pointers are continuously modified to bad values until the `propListener` callback is called; the modification of those pointers then stops, but the damage is already done. At the end of the communication this later produces _error pointer being freed was not allocated_ when the code destroys the external structures. – Angel Martin May 14 '13 at 09:24
  • A solution might be to have a mutex lock the process just after the stopping, to be unlocked once all the samples have been processed (see the sketch after these comments)? – krug May 14 '13 at 09:36
  • Hi @krug, I have already tried protecting the recording and playing callbacks with `@synchronized(iosAudio){ ... }` (like a mutex); I placed the sync block in the recording and playing callbacks with the changes from your first answer included, with the same result. – Angel Martin May 14 '13 at 10:46
  • Yes, but your AudioUnit doesn't change, therefore the `@synchronized` won't lock anything. – krug May 14 '13 at 16:36
  • I added a way to ignore the samples occurring during the switch, if it helps. – krug May 14 '13 at 17:20
  • I checked the mutex option without success. The problem is that the code you suggest, @krug, based on the `isStopped` semaphore, will never lock in my problematic case, because after plug/unplug `playbackCallback` and `recordingCallback` are called 3-4 more times before `propListener` and its `stop` are called. – Angel Martin May 20 '13 at 06:41
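
For completeness, a minimal sketch of the mutex idea from the comments above, assuming a single `pthread_mutex_t` shared by the callbacks and the route-change handler (names and structure are placeholders, not tested code):

#include <pthread.h>

static pthread_mutex_t audioMutex = PTHREAD_MUTEX_INITIALIZER;

static OSStatus recordingCallback(void *inRefCon, /* ... */) {
    // trylock, never lock: blocking the real-time audio thread is unsafe
    if (pthread_mutex_trylock(&audioMutex) != 0) {
        return noErr; // a route change is in progress, drop this buffer
    }
    // ... render and process as before ...
    pthread_mutex_unlock(&audioMutex);
    return noErr;
}

void propListener(void *inClientData, /* ... */) {
    // wait for any in-flight callback to finish, then hold the lock
    // across the whole stop/reroute/start sequence
    pthread_mutex_lock(&audioMutex);
    [iosAudio stop];
    // ... apply the route change ...
    [iosAudio start];
    pthread_mutex_unlock(&audioMutex);
}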