
I'm trying to set up a live audio streaming system where a client will broadcast the audio from his microphone (accessed with getUserMedia) to one or more peers. To do so, chunks of the audio stream are sent through a WebSocket to a server, which will then relay this information to all the peers connected to the WebSocket.
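
For context, the stream and ws variables used in the code below come from a setup roughly like this (the URL is just a placeholder, and I've left out my real error handling):

var ws = new WebSocket("ws://example.com/broadcast"); // placeholder URL

navigator.mediaDevices.getUserMedia({ audio: true }).then(function(stream) {
    // the broadcasting code shown below runs here, with access to `stream`
});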

My main problem is how to play the chunks of data received by the peers on a website.

First, this is how I send the chunks of audio data in my broadcasting client's JS script:

var context = new AudioContext();
var audioStream = context.createMediaStreamSource(stream);
// Create a processor node of buffer size 2048, with one input channel and one output channel
var node = context.createScriptProcessor(2048, 1, 1);
// Listen to the audio data and send each chunk over the WebSocket
node.onaudioprocess = function(e) {
    var inputData = e.inputBuffer.getChannelData(0);
    ws.send(JSON.stringify({sound: _arrayBufferToBase64(convertoFloat32ToInt16(inputData))}));
};
audioStream.connect(node);
node.connect(context.destination);

_arrayBufferToBase64 and convertoFloat32ToInt16 are methods that I use, respectively, to send the stream in base64 format and to convert the inputData to Int16 instead of that fancy Float32 representation (I used methods found on SO, which are supposed to work).
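
For completeness, the two helpers are roughly the usual snippets found on SO (sketches, not necessarily my exact code):

function convertoFloat32ToInt16(buffer) {
    var l = buffer.length;
    var buf = new Int16Array(l);
    while (l--) {
        // clamp to [-1, 1], then scale to the signed 16-bit range
        var s = Math.max(-1, Math.min(1, buffer[l]));
        buf[l] = s < 0 ? s * 0x8000 : s * 0x7FFF;
    }
    return buf.buffer;
}

function _arrayBufferToBase64(buffer) {
    var binary = '';
    var bytes = new Uint8Array(buffer);
    for (var i = 0; i < bytes.byteLength; i++) {
        binary += String.fromCharCode(bytes[i]);
    }
    return window.btoa(binary);
}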

Then, after the data has gone through the WebSocket, I collect it in another script, which is executed on the website of each peer:

var audioCtx = new AudioContext();
var arrayBuffer = _base64ToArrayBuffer(mediaJSON.sound);
// Decode the received chunk and play it once the decode callback fires
audioCtx.decodeAudioData(arrayBuffer, function(buffer) {
    playSound(buffer);
});
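
For reference, _base64ToArrayBuffer is the usual atob-based snippet (again, roughly what I use):

function _base64ToArrayBuffer(base64) {
    var binary = window.atob(base64);
    var bytes = new Uint8Array(binary.length);
    for (var i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i);
    }
    return bytes.buffer;
}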

I also need to convert the received base64 data to an ArrayBuffer, which is then decoded by decodeAudioData to produce an AudioBuffer. The playSound function is as simple as this:

// Play the decoded buffer once through the default output
function playSound(arrBuff) {
    var src = audioCtx.createBufferSource();
    src.buffer = arrBuff;
    src.loop = false;
    src.connect(audioCtx.destination);
    src.start();
}

But for some reason, I can't get any sound to play from this script. I'm pretty sure the broadcasting script is correct, but not the "listener" script. Can anyone help me with this?

Thanks!

Jonathan Taws
  • Why not use WebRTC and let the browser do all the work for you? How many peers? See also: http://stackoverflow.com/a/20850467/362536 – Brad Oct 08 '14 at 20:24
  • @Brad The current architecture I'm using is based on WebSocket for all the communication between the server and the clients, and I can't change it to be WebRTC peer-to-peer based. There could be a lot of peers connected at the same time. I've already checked that link, but my problem is how to play audio on the clients' websites while receiving new chunks of data. – Jonathan Taws Oct 08 '14 at 20:42
  • Are you sure the scoping of audioBuffer is correct? If that's not global scope I'm not sure it would be. – cwilso Oct 08 '14 at 20:44
  • @cwilso You mean that audioBuffer shouldn't be declared inside a function? Wouldn't the scope be correct if it's declared before decodeData? – Jonathan Taws Oct 08 '14 at 20:48
  • @Hawknight Don't use base64, use binary data. It is much more efficient. And then, you will need to buffer data yourself and play it back using a script node. You can't simply play back the buffer as it arrives. It will cut out and be unpredictable since the timing will always vary. – Brad Oct 08 '14 at 20:50
  • @Brad So you suggest sending the data without converting it to base64 (raw data)? But I don't really understand how I should proceed on the "listener" side to then play the audio data I've received from the WebSocket – Jonathan Taws Oct 09 '14 at 11:44
  • @Hawknight Yes, skipping the base64 encoding is much more efficient. For playback, you need to set up your own buffers, create a script node with the Web Audio API, and then control its output based on the data you have buffered. In other words, you basically have to reinvent the entire streaming wheel, with buffer control and decoding and all that (a rough sketch of this approach appears after these comments). – Brad Oct 09 '14 at 15:56
  • @Brad I've started writing the playback code, but I don't see why I need to "reinvent the entire streaming wheel". Would you have any example of how to play back an audio buffer? From what I've seen, I mainly need to use an AudioBuffer, which is what I am sending through the WebSocket. Won't the playSound function be enough? – Jonathan Taws Oct 12 '14 at 19:46
  • @Hawknight No, playSound is absolutely not enough. Think about this... how will you handle the timing of playback? You have to be sample-accurate, and the only way to get that done is with a script node. You can't just play stuff back as it comes in. You might be off by as much as a second or more, and that's in normal cases. You must buffer the audio on your own. You must play it back on your own. You're creating your own transport for the audio, so there is nothing built-in you can use to handle the playback. – Brad Oct 12 '14 at 19:48
  • @Hawknight I'd like to do the same thing. I'll be transmitting stethoscope sounds and need it to be lossless. Just curious to know if you got it working? If so would you be willing to post your solution here? Thanks! – Randy Findley Feb 12 '15 at 19:30
  • @RandyFindley I would have liked to help you, but we decided not to carry on with the development of real-time audio feedback; it is really cumbersome to implement, as you can see from the comments. – Jonathan Taws Feb 12 '15 at 21:00
  • 1
    I know this is an old question, but I can spot an error... `playSound` is being called before `audioBuffer` is set. You'd need to call it from inside the callback to ensure it's set before using it. You don't even need `audioBuffer`. You can just use `playSound(buffer)`. – tjhorner Aug 16 '15 at 11:33
  • Indeed, it was a potential error; I updated the code with your correction! Thanks. – Jonathan Taws Aug 17 '15 at 13:19
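
For reference, here is a rough, untested sketch of the buffered-playback approach Brad describes in the comments above. It assumes the sender switches to raw Int16 PCM sent as binary WebSocket frames at the same sample rate as the listener's AudioContext, and that ws is the listener's WebSocket connection:

// Queue incoming samples and let a ScriptProcessorNode pull from the queue,
// instead of calling playSound for every chunk as it arrives.
var playCtx = new AudioContext();
var sampleQueue = [];   // Float32 samples waiting to be played

ws.binaryType = "arraybuffer";
ws.onmessage = function(event) {
    var int16 = new Int16Array(event.data);
    for (var i = 0; i < int16.length; i++) {
        sampleQueue.push(int16[i] / 0x8000);   // back to [-1, 1] floats
    }
};

var player = playCtx.createScriptProcessor(2048, 1, 1);
player.onaudioprocess = function(e) {
    var out = e.outputBuffer.getChannelData(0);
    for (var i = 0; i < out.length; i++) {
        // play queued samples, and output silence when the queue runs dry
        out[i] = sampleQueue.length ? sampleQueue.shift() : 0;
    }
};
player.connect(playCtx.destination);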

0 Answers