
In a Chrome extension, is it possible to create a waveform or spectrogram image (or canvas) element based only on the .wav file URL of an <audio> tag?

I assume that solving this will require multiple steps:

  1. Load the sound file data (rough sketch below).
  2. Convert the data into a format that can be used for drawing, probably an array of sample values, similar to SoX's .dat output format.
  3. Generate an image or draw on a canvas.
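
I assume step 1 would look roughly like this (just a sketch, assuming the extension has host permission for the audio URL):

    // Fetch the raw .wav bytes of the <audio> element into an ArrayBuffer.
    var request = new XMLHttpRequest();
    request.open('GET', document.querySelector('audio').src, true);
    request.responseType = 'arraybuffer';
    request.onload = function () {
      var rawWavData = request.response; // ArrayBuffer with the raw .wav bytes
      // steps 2 and 3 would go here
    };
    request.send();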

Please provide JavaScript code to turn this:

<audio src="http://goo.gl/hWyNYu" controls />

into this:

[Image: waveform rendered with gnuplot, shown next to the Chrome audio player as a mockup]

This needs to be done without playing back the audio (unlike existing solutions) and without any server-side processing.

Rather than asking about one specific step in the process, this question seeks a complete answer with complete code, so that anybody can test it first and understand it later.

qubodup

1 Answer


Well, theoretically, you should be able to use those existing client-side solutions without playing back the audio, using an OfflineAudioContext. Unfortunately, those solutions both use ScriptProcessorNode, and from what I've heard, existing implementations are broken when using ScriptProcessorNode in an OfflineAudioContext, and not likely to be fixed. I suspect AnalyserNode may be broken in OfflineAudioContext too.

It would probably work to use an OfflineAudioContext to just "play back" the entire sound file, then draw your canvas based on the output buffer that is created.
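
A minimal sketch of that approach, assuming the file has already been decoded into an AudioBuffer (the fetch-and-decode code is in the sketch at the end of this answer):

    // Render a decoded AudioBuffer through an OfflineAudioContext and read
    // the result. `decodedBuffer` is assumed to come from decodeAudioData.
    function renderOffline(decodedBuffer, callback) {
      var offlineCtx = new OfflineAudioContext(
          decodedBuffer.numberOfChannels,
          decodedBuffer.length,
          decodedBuffer.sampleRate);
      var source = offlineCtx.createBufferSource();
      source.buffer = decodedBuffer;
      source.connect(offlineCtx.destination);
      source.start(0);
      offlineCtx.oncomplete = function (event) {
        // Float32Array of samples in the range -1..1 for channel 0
        callback(event.renderedBuffer.getChannelData(0));
      };
      offlineCtx.startRendering();
    }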

Or you could use a regular AudioContext, but make sure the output isn't audible (say, by piping the sound through a gain node with gain of zero). This is really ugly, slow, and would interfere with any other Web Audio API usage on your page.
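
If you really wanted to go that route, the plumbing might look roughly like this (just a sketch; you would still have to play the element and sample the signal as it goes by, which is exactly the slowness described above):

    // Route the <audio> element through a muted gain node so nothing is
    // audible. An AnalyserNode sits in the chain to sample the signal
    // while the element plays.
    var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    var source = audioCtx.createMediaElementSource(document.querySelector('audio'));
    var analyser = audioCtx.createAnalyser();
    var mute = audioCtx.createGain();
    mute.gain.value = 0; // silence the output

    source.connect(analyser);
    analyser.connect(mute);
    mute.connect(audioCtx.destination);
    // You would still call play() on the element and poll
    // analyser.getByteTimeDomainData() repeatedly during playback.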

You could also try fetching the entire file as a binary blob into an ArrayBuffer and parsing it yourself. The WAV file format is not all that complicated (or you might be able to find open-source code to do this). If you wanted to handle compressed formats like MP3 this way, you would definitely not want to write the decoder from scratch.
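
For a canonical 16-bit PCM .wav, a hand-rolled parser can be fairly short. This sketch assumes the simplest possible layout (a 44-byte header followed directly by the data chunk) and skips all validation; real files may need a proper chunk walk:

    // Parse a canonical 16-bit PCM WAV file from an ArrayBuffer.
    function parseWav(arrayBuffer) {
      var view = new DataView(arrayBuffer);
      var numChannels   = view.getUint16(22, true);
      var sampleRate    = view.getUint32(24, true);
      var bitsPerSample = view.getUint16(34, true); // assumed to be 16
      var dataLength    = view.getUint32(40, true); // "data" chunk at byte 36
      var samples = new Float32Array(dataLength / 2);
      for (var i = 0; i < samples.length; i++) {
        // Normalize 16-bit samples to the range -1..1.
        // For stereo files the samples are interleaved per channel.
        samples[i] = view.getInt16(44 + i * 2, true) / 32768;
      }
      return { sampleRate: sampleRate, channels: numChannels, samples: samples };
    }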

EDIT:

I think the Web Audio-based solutions above are too complicated. You have to set up an AudioBuffer and decode the audio into it using decodeAudioData anyway. Once you've done that, there is no need to even create an AudioBufferSourceNode. You can just get the audio data directly from the AudioBuffer by calling getChannelData on it.
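
Putting that together, a complete sketch might look like the following. The canvas size, the min/max-peak-per-pixel drawing, and the assumption that the extension has host permission to fetch the audio URL are mine, not something the question specifies:

    // Fetch the .wav, decode it, pull the raw samples with getChannelData,
    // and draw one peak bar per pixel column on a canvas inserted after
    // the <audio> element.
    var audioEl = document.querySelector('audio');
    var canvas = document.createElement('canvas');
    canvas.width = 500;
    canvas.height = 80;
    audioEl.parentNode.insertBefore(canvas, audioEl.nextSibling);
    var ctx = canvas.getContext('2d');

    var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    var request = new XMLHttpRequest();
    request.open('GET', audioEl.src, true);
    request.responseType = 'arraybuffer';
    request.onload = function () {
      audioCtx.decodeAudioData(request.response, function (buffer) {
        drawWaveform(buffer.getChannelData(0)); // first channel only
      });
    };
    request.send();

    function drawWaveform(samples) {
      var step = Math.ceil(samples.length / canvas.width);
      var mid = canvas.height / 2;
      ctx.fillStyle = '#333';
      for (var x = 0; x < canvas.width; x++) {
        var min = 1, max = -1;
        for (var i = x * step; i < (x + 1) * step && i < samples.length; i++) {
          if (samples[i] < min) min = samples[i];
          if (samples[i] > max) max = samples[i];
        }
        // One vertical bar per pixel column spanning the min..max range.
        ctx.fillRect(x, mid - max * mid, 1, Math.max(1, (max - min) * mid));
      }
    }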

aldel