
I need to draw a waveform for an audio file (CMK.mp3) in my application. To do this I tried this Solution.

That solution uses AVAssetReader, which takes too much time to display the waveform.

Can anyone please help? Is there another way to display the waveform more quickly? Thanks

iPhoneDv

1 Answer


AVAssetReader is the only way to read an AVAsset, so there is no way around that. You will want to tune the code to process it without incurring unwanted overhead. I have not tried that code yet, but I intend to use it to build a sample project to share on GitHub once I have the time, hopefully soon.
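As a starting point, here is a minimal sketch of what that reader setup typically looks like. This is my own untested illustration, not the linked solution's code; the helper name and the choice of 16-bit interleaved PCM are assumptions:

```objectivec
#import <AVFoundation/AVFoundation.h>

// Hypothetical helper: configure an AVAssetReader that decodes the first
// audio track to 16-bit linear PCM, which is convenient for scanning
// amplitudes for a waveform. Error handling is abbreviated.
static AVAssetReader *WaveformReaderForAsset(AVAsset *asset, NSError **error) {
    AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:error];
    if (!reader) return nil;

    AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
    NSDictionary *settings = @{
        AVFormatIDKey : @(kAudioFormatLinearPCM),
        AVLinearPCMBitDepthKey : @16,
        AVLinearPCMIsBigEndianKey : @NO,
        AVLinearPCMIsFloatKey : @NO,
        AVLinearPCMIsNonInterleaved : @NO
    };
    AVAssetReaderTrackOutput *output =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                                   outputSettings:settings];
    [reader addOutput:output];
    return reader;
}
```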

My approach to tune it will be to do the following:

  1. Eliminate all Objective-C method calls and use C only instead
  2. Move all work to a secondary queue off the main queue and use a block to call back once finished
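A rough sketch of step 2, assuming the reader and output from a setup like the one above; `ProcessSampleBuffer` and `completion` are hypothetical names for the waveform-accumulation function and the UI callback:

```objectivec
#import <AVFoundation/AVFoundation.h>

// Run the read loop on a serial background queue, then hop back to the
// main queue with a completion block when the pass is finished.
dispatch_queue_t waveformQueue =
    dispatch_queue_create("com.example.waveform", DISPATCH_QUEUE_SERIAL);

dispatch_async(waveformQueue, ^{
    [reader startReading];
    CMSampleBufferRef sampleBuffer = NULL;
    while ((sampleBuffer = [output copyNextSampleBuffer])) {
        ProcessSampleBuffer(sampleBuffer); // plain C inside the hot loop
        CFRelease(sampleBuffer);           // copyNextSampleBuffer returns +1
    }
    dispatch_async(dispatch_get_main_queue(), ^{
        completion(); // hypothetical block that draws the waveform
    });
});
```

A serial queue has a second benefit here: because the blocks run one at a time, it naturally sequences reader work so that two AVAssetReaders are never active simultaneously.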

One obstacle with rendering a waveform is that you cannot have more than one AVAssetReader running at a time, at least the last time I tried (it may have changed with iOS 6). A new reader cancels the other, and that interrupts playback, so you need to do your work in sequence. I do that with queues.

In an audio app that I built, it reads the CMSampleBufferRef into a CMBufferQueueRef, which can hold multiple sample buffers (see copyNextSampleBuffer on AVAssetReaderOutput). You can configure the queue to give yourself enough time to process a waveform after an AVAssetReader finishes reading an asset, so that the current playback does not exhaust the contents of the CMBufferQueueRef before you start reading more buffers into it for the next track. That will be my approach when I attempt it.

I just have to be careful not to use too much memory by making the buffer too big, or to make it so big that it causes issues with playback. I do not know yet how long it will take to process the waveform, so I will test it on my older iPods and iPhone 4 before I try it on my iPhone 5 to see if they all perform well.
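The buffering idea above can be sketched with the Core Media C API. This is an illustrative sketch, not my app's actual code; the capacity of 30 buffers is an arbitrary guess you would tune against memory and playback pressure:

```c
#include <CoreMedia/CoreMedia.h>

// Create a CMBufferQueueRef that holds decoded sample buffers so playback
// can drain them while the next waveform pass is still running.
CMBufferQueueRef bufferQueue = NULL;
OSStatus status = CMBufferQueueCreate(
    kCFAllocatorDefault,
    30, // capacity hint; too large wastes memory, too small starves playback
    CMBufferQueueGetCallbacksForUnsortedSampleBuffers(),
    &bufferQueue);

if (status == noErr) {
    CMSampleBufferRef buffer = NULL;
    // In Objective-C this would be [output copyNextSampleBuffer] in a loop;
    // CopyNextBuffer stands in for that call here.
    while ((buffer = CopyNextBuffer())) {
        CMBufferQueueEnqueue(bufferQueue, buffer);
        CFRelease(buffer); // the queue retains what it enqueues
    }
}
```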

Be sure to stay as close to C as possible. Calls to Objective-C resources during this processing incur potential thread switching and other run-time overhead costs that are significant enough to be noticeable, and you will want to avoid that. What I may do is set up Key-Value Observing (KVO) to trigger the AVAssetReader to start the next task quickly, so that I can maintain gapless playback between tracks.
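The KVO idea could look something like this. The `processor` object, its `isProcessing` property, and `startNextReader` are all hypothetical names for illustration:

```objectivec
#import <Foundation/Foundation.h>

// In the class coordinating the readers, register once, e.g. in init:
//   [processor addObserver:self
//               forKeyPath:@"isProcessing"
//                  options:NSKeyValueObservingOptionNew
//                  context:NULL];

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context {
    // When the current pass flips isProcessing to NO, kick off the next
    // track's reader immediately to keep playback gapless.
    if ([keyPath isEqualToString:@"isProcessing"] &&
        ![[change objectForKey:NSKeyValueChangeNewKey] boolValue]) {
        [self startNextReader]; // hypothetical: begins the next AVAssetReader
    }
}
```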

Once I start my audio experiments I will put them on GitHub. I've created a repository where I will do this work. If you are interested you can "watch" that repo so you will know when I start committing updates to it.

https://github.com/brennanMKE/Audio

Brennan
  • By the way, I recently read a book by Bill Dudney which helped me understand the Core Audio and AV Foundation code much more. It's called All the C You Need to Know. Now I know why CMSampleBufferRef has Ref at the end and how to use it effectively. https://itunes.apple.com/us/book/all-the-c-you-need-to-know/id581989356?mt=11 – Brennan Jan 07 '13 at 16:27