29

So I'm trying to use Web Audio API to decode & play MP3 file chunks streamed to the browser using Node.js & Socket.IO.

Is my only option, in this context, to create a new AudioBufferSourceNode for each audio data chunk received, or is it possible to create a single AudioBufferSourceNode for all chunks and simply append new audio data to the end of the source node's buffer attribute?

Currently this is how I'm receiving my MP3 chunks, decoding them and scheduling them for playback. I have already verified that each chunk being received is a 'valid MP3 chunk' and is being successfully decoded by the Web Audio API.

var audioContext = new AudioContext();
var startTime = 0;

socket.on('chunk_received', function(chunk) {
    audioContext.decodeAudioData(toArrayBuffer(chunk.audio), function(buffer) {
        var source = audioContext.createBufferSource();
        source.buffer = buffer;
        source.connect(audioContext.destination);

        source.start(startTime);
        startTime += buffer.duration;
    });
});
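For reference, here is a minimal sketch of the `toArrayBuffer()` helper the snippet assumes. It treats the incoming chunk as either an ArrayBuffer or a typed-array/Buffer view over one; that's an assumption, since how Socket.IO delivers binary data varies by version and transport.

```javascript
// Hypothetical helper assumed by the snippet above: normalize whatever
// binary form the chunk arrives in to a standalone ArrayBuffer, which is
// what decodeAudioData() expects.
function toArrayBuffer(data) {
    if (data instanceof ArrayBuffer) {
        return data;
    }
    // A typed-array or Node Buffer view may cover only part of a larger
    // underlying buffer, so copy exactly the bytes this view spans.
    return data.buffer.slice(data.byteOffset, data.byteOffset + data.byteLength);
}
```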

Any advice or insight into how best to 'update' Web Audio API playback with new audio data would be greatly appreciated.

Samuel Jenks
  • 1,137
  • 4
  • 21
  • 34
Jonathan Byrne
  • 304
  • 1
  • 3
  • 7
  • I have some code I'm working on that's nearly identical to what you're doing. Please update this thread if you ever finish. I will as well. – Brad.Smith Nov 27 '13 at 03:47
  • 4
    Jonathan, can you post your server code please? – Brad.Smith Dec 03 '13 at 16:07
  • I'm trying to do the same thing, but I have problems with the mp3 chunks, you have an example of how make a correct stream of an mp3 audio with socket.io? – cmarrero01 Feb 28 '15 at 02:06
  • Why use Web Sockets? You can use just regular HTTP and use a simple audio element. – Brad Dec 14 '21 at 00:12

3 Answers

8

Currently, decodeAudioData() requires complete files and cannot provide chunk-based decoding on incomplete files. The next version of the Web Audio API should provide this feature: https://github.com/WebAudio/web-audio-api/issues/337

Meanwhile, I've begun writing examples of decoding audio in chunks until the new API version is available.

https://github.com/AnthumChris/fetch-stream-audio

anthumchris
  • 8,245
  • 2
  • 28
  • 53
6

No, you can't reuse an AudioBufferSourceNode, and you can't push onto an AudioBuffer; their lengths are immutable.

This article (http://www.html5rocks.com/en/tutorials/audio/scheduling/) has some good information about scheduling with the Web Audio API. But you're on the right track.
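Building on the scheduling pattern from that article, here's a hedged sketch that tracks a running play head against `ctx.currentTime`, so a chunk that arrives late isn't scheduled in the past (`scheduleBuffer`, `playHead`, and the 50 ms restart margin are illustrative, not from the question):

```javascript
// Running position of the next scheduled chunk, in AudioContext time.
var playHead = 0;

// Schedule one decoded AudioBuffer back-to-back with the previous one.
function scheduleBuffer(ctx, buffer) {
    var source = ctx.createBufferSource();
    source.buffer = buffer;
    source.connect(ctx.destination);

    // If the play head has fallen behind real time (e.g. a network stall),
    // restart slightly ahead of "now" rather than scheduling in the past.
    if (playHead < ctx.currentTime) {
        playHead = ctx.currentTime + 0.05;
    }
    source.start(playHead);
    playHead += buffer.duration;
}
```

The key point from the article is that `source.start()` takes an absolute AudioContext timestamp, so accumulating durations onto a play head gives gapless playback as long as chunks decode faster than they play.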

Kevin Ennis
  • 14,226
  • 2
  • 43
  • 44
1

I see at least 2 possible approaches.

  1. Set up a ScriptProcessorNode, which feeds a queue of received & decoded data into the realtime Web Audio flow.

  2. Exploit AudioBufferSourceNode's loop property, updating the AudioBuffer's contents depending on the current audio time.

Both approaches are implemented in https://github.com/audio-lab/web-audio-stream. You can use either to feed received data to Web Audio.
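A rough sketch of approach 1, assuming decoded mono Float32Array chunks are pushed onto a `sampleQueue` as they arrive (`sampleQueue`, `fillBlock`, and `attachProcessor` are illustrative names, not from the linked library):

```javascript
// Queue of decoded Float32Array chunks awaiting playback.
var sampleQueue = [];

// Copy queued samples into one output block, padding with silence on underrun.
function fillBlock(output) {
    var written = 0;
    while (written < output.length && sampleQueue.length > 0) {
        var chunk = sampleQueue[0];
        var n = Math.min(chunk.length, output.length - written);
        output.set(chunk.subarray(0, n), written);
        written += n;
        if (n === chunk.length) {
            sampleQueue.shift();          // chunk fully consumed
        } else {
            sampleQueue[0] = chunk.subarray(n); // keep the remainder
        }
    }
    output.fill(0, written); // silence if the queue ran dry
}

// Wire the queue to a ScriptProcessorNode (mono, 4096-frame blocks).
function attachProcessor(ctx) {
    var node = ctx.createScriptProcessor(4096, 1, 1);
    node.onaudioprocess = function (e) {
        fillBlock(e.outputBuffer.getChannelData(0));
    };
    node.connect(ctx.destination);
    return node;
}
```

Note that ScriptProcessorNode is deprecated in current specs in favor of AudioWorklet, but the queue-draining idea is the same either way.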

dy_
  • 6,462
  • 5
  • 24
  • 30