
I'm building a piece of hardware that sends data into the headphone jack, and I need a way to record short snippets and analyze them quickly (ideally without having to save a file and reopen it for analysis). I've played around with FFT and the Accelerate framework, though I don't think that's exactly what I'm looking for.

I'm mostly wondering whether something like this is feasible: record a ~30 ms snippet of audio, then grab an array of floats representing the voltage (or dB levels?) throughout the recording. I could then interpret the data based on the levels at each millisecond of the recording. Would something like AVAudioRecorder record at a resolution that lets me examine every millisecond? Since this will be a repeating process, I'm also hoping to keep CPU usage down.

colt

1 Answer


This is totally doable. Use AudioSession with AudioUnits.

Mark
  • Cool, thanks for the Audio Units suggestion. Do you think I should then go with the "I/O with a Render Callback Function" design (found [here](http://developer.apple.com/library/ios/#DOCUMENTATION/MusicAudio/Conceptual/AudioUnitHostingGuide_iOS/ConstructingAudioUnitApps/ConstructingAudioUnitApps.html#//apple_ref/doc/uid/TP40009492-CH16-SW1)) and process the audio in the render callback? Also, what does the data look like coming from the audio session? – colt Jun 28 '12 at 22:01
  • Yes, a render callback is the way to go. What format the data is in depends on how you set up the audio stream. Most likely you will get the data in sets of 256 `SInt16` samples, with max and min values of `#define sn16_MAX_SAMPLE_VALUE 32767` and `#define sn16_MIN_SAMPLE_VALUE -32768` respectively. – Mark Jul 05 '12 at 19:23