I am rendering audio in the browser (mobile and desktop) that arrives over the network (via WebSockets) as a potentially endless stream of successive audio buffers (Float32Array typed arrays). To minimize the chance of playback starvation, I'd like to queue up multiple buffers before audio rendering starts. Does the Web Audio API support queuing up multiple buffers (as OpenAL does) to be rendered sequentially in a streaming fashion? I am not talking about simultaneous rendering of multiple buffers. Before I roll my own ...
4 Answers
This is something you need to handle yourself. As far as I know, there is no way to queue up multiple buffers for the API to play back on its own.
You can use a ScriptProcessorNode to implement this yourself.
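A minimal sketch of that approach, assuming incoming chunks are pushed onto a plain JavaScript array acting as a FIFO. `fillFromQueue` is a hypothetical helper name (not from the Web Audio API), and the browser wiring at the bottom is shown only in comments:

```javascript
// Copy samples from a FIFO of Float32Array chunks into a fixed-size
// output block, zero-filling the tail if the queue runs dry.
function fillFromQueue(queue, output) {
  let written = 0;
  while (written < output.length && queue.length > 0) {
    const chunk = queue[0];
    const needed = output.length - written;
    if (chunk.length <= needed) {
      output.set(chunk, written);       // whole chunk fits: consume it
      written += chunk.length;
      queue.shift();
    } else {
      output.set(chunk.subarray(0, needed), written); // partial fit
      queue[0] = chunk.subarray(needed);              // keep the remainder
      written += needed;
    }
  }
  output.fill(0, written); // underruns produce silence, not garbage
  return written;          // number of real samples written
}

// In the browser it would be wired up roughly like this:
// const queue = [];                     // websocket onmessage pushes chunks here
// const node = context.createScriptProcessor(4096, 0, 1);
// node.onaudioprocess = (e) =>
//   fillFromQueue(queue, e.outputBuffer.getChannelData(0));
// node.connect(context.destination);
```

Waiting until the queue holds several chunks before connecting the node gives the pre-buffering the question asks about.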

The Web Audio API does not support a queue of buffers. Instead, you can concatenate the buffers yourself (let's say you have buffer1 and buffer2):
var tempBuffer = context.createBuffer(buffer1.numberOfChannels,
                                      buffer1.length + buffer2.length,
                                      buffer1.sampleRate);
// Concatenate the buffers channel by channel.
for (var i = 0; i < buffer1.numberOfChannels; i++) {
    var channel = tempBuffer.getChannelData(i);
    channel.set(buffer1.getChannelData(i), 0);              // buffer1's data at offset 0
    channel.set(buffer2.getChannelData(i), buffer1.length); // buffer2's data placed right after buffer1
}
// tempBuffer now contains your 2 buffers concatenated
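The same idea generalizes to any number of raw Float32Array chunks as they arrive off the websocket, before wrapping the result in an AudioBuffer. A small sketch (the helper name `concatFloat32` is illustrative, not a library function):

```javascript
// Concatenate an ordered list of Float32Array chunks into one array.
function concatFloat32(chunks) {
  const total = chunks.reduce((n, c) => n + c.length, 0);
  const out = new Float32Array(total);
  let offset = 0;
  for (const c of chunks) {
    out.set(c, offset); // copy each chunk at its running offset
    offset += c.length;
  }
  return out;
}
```

The result can then be copied into a freshly created AudioBuffer with `getChannelData(i).set(...)` exactly as above.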
Since you are talking about a stream, this gets harder if you only have one incoming stream. If you use a ScriptProcessorNode to cut out the empty places, you have to either put something in their place or just remove the silence; but then the returned sample is too short and you still get gaps. A solution is to keep a buffer outside the ScriptProcessorNode. Every time the ScriptProcessorNode fires, check for silence, cut it out, append the rest to that outside buffer, and return nothing. Once you have x seconds of audio in that buffer, start playing it. The only downside is that the x-second cushion shrinks every time you remove something, and eventually the buffer gets so small that there is no audio left to play and playback might starve. Beyond that, I am not sure about appending audio to a buffer while it is already playing; you should try that out first.
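A hedged sketch of that staging-buffer idea: drop blocks that are entirely silent and append the rest to a growing stage, so the caller can start playback once the stage holds enough audio. The threshold value and the names `isSilent`/`stageBlock` are assumptions for illustration:

```javascript
// A block counts as silent if no sample exceeds the threshold (assumed value).
function isSilent(block, threshold = 1e-4) {
  for (const s of block) {
    if (Math.abs(s) > threshold) return false;
  }
  return true;
}

// Stage is a plain object: an array of kept blocks plus a running sample count.
// Returns the total number of staged samples so the caller can decide
// when "x seconds" worth of audio has accumulated.
function stageBlock(stage, block) {
  if (!isSilent(block)) {
    stage.blocks.push(block);
    stage.samples += block.length;
  }
  return stage.samples;
}
```

Once `stage.samples / sampleRate` exceeds the chosen cushion, the blocks can be concatenated into an AudioBuffer and played.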

- Depends on how you receive the stream; I would have to see your code though. – MarijnS95 May 13 '14 at 06:28
- @Brad Hard to see on a phone that it wasn't the OP who asked this. It pretty much depends on how he receives the stream. If he is receiving it as a `MediaStream` through WebRTC, he cannot route it through the audio API (not yet implemented). If he only has one stream it is hard, as this means getting a buffer from it with the ScriptProcessorNode, then checking whether there are gaps, and then re-sending it without the gaps; but how is he going to fill the gaps? (If he just cut out the gaps, the returned sample would be too short and there would still be a small stretch with no audio, at the end.) – MarijnS95 May 13 '14 at 09:02
- What I'm really asking is: you won't be able to concatenate buffers once the buffers have already started playback, right? My last experiments with this were not fruitful, but the Web Audio API moves very quickly, which is why I ask. – Brad May 13 '14 at 12:53
- @Brad That is what I added to the original answer: I am not sure whether it would work (in theory it could). Sadly we do not have any clarity about the received 'endless stream of successive audio buffers'. – MarijnS95 May 13 '14 at 16:50
This is a problem that screams for Media Source Extensions: https://dvcs.w3.org/hg/html-media/raw-file/tip/media-source/media-source.html.
It's available in Chrome, IE11, and soon in Firefox.
If you want to do it using Web Audio nonetheless, check out this project and this other code.

I rolled my own implementation: the browser initiates a WebSocket connection to a Node.js server, which responds by streaming typed-array buffers back to the browser. A web worker in the browser manages all the WebSocket traffic and populates a shared buffer (a Transferable Object) that is accessible browser-side from the Web Audio API event loop. As the event loop consumes this circular queue of buffers, it triggers the web worker to request more from the server. It worked when I finished it using Node.js 0.10.x, but modern Node.js breaks it; see source here
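The circular queue at the heart of that design can be sketched as a fixed number of slots with a worker-side write index and an audio-loop-side read index. This is a hypothetical reconstruction, not code from the linked source; names and the low-water heuristic are illustrative:

```javascript
// Fixed-capacity ring of Float32Array slots bridging the producer
// (websocket worker) and consumer (audio render loop).
class RingQueue {
  constructor(slots) {
    this.buffers = new Array(slots).fill(null);
    this.readIdx = 0;
    this.writeIdx = 0;
    this.count = 0;
  }
  // Producer side: called when a websocket message arrives.
  push(buf) {
    if (this.count === this.buffers.length) return false; // full: drop or apply backpressure
    this.buffers[this.writeIdx] = buf;
    this.writeIdx = (this.writeIdx + 1) % this.buffers.length;
    this.count++;
    return true;
  }
  // Consumer side: called by the audio render loop.
  shift() {
    if (this.count === 0) return null; // starved
    const buf = this.buffers[this.readIdx];
    this.buffers[this.readIdx] = null;
    this.readIdx = (this.readIdx + 1) % this.buffers.length;
    this.count--;
    return buf;
  }
  // When the queue drops below half full, ask the server for more.
  get lowWater() {
    return this.count < this.buffers.length / 2;
  }
}
```

In the described setup, `lowWater` turning true is what would trigger the worker to request the next batch of buffers from the Node.js server.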
