
I wanted to experiment with something outside my comfort zone and prototype an "online radio" app.

I then fell into the rabbit hole of WebRTC streaming, media servers, WebRTC gateways, P2P network graphs...

It seems WebRTC is not suited for this kind of task: most browsers limit it to around 10 simultaneous peer connections, and scaling WebRTC to large numbers of viewers requires a lot of infrastructure work. See, for example: WebRTC - scalable live stream broadcasting / multicasting

Then it occurred to me that simple live audio streams without JavaScript have existed for a while, in this form:

http://stream.radioreklama.bg/radio1.opus

The client for such a stream can be a plain HTML <audio> tag.

Now all I have to do is create this "magic" URL where a live audio stream is available. Is this possible with Node.js?

The missing parts to create my prototype are:

1: Send a "live" audio stream from a client (broadcaster) to the server (using getUserMedia and socket.io).

2: Pipe this audio stream to a "/stream.mp3" endpoint with the proper encoding.

If feasible, I think this would be an interesting approach to solving the large-scale one-to-many streaming problem for audio, but maybe I'm missing some core information.

Ideal client:

import io from 'socket.io-client';

const socket = io.connect('http://localhost:8888');

// Broadcasting code: capture the microphone and forward recorded chunks
navigator.mediaDevices
  .getUserMedia({ audio: true, video: false })
  .then(userMediaStream => {
    const mediaRecorder = new MediaRecorder(userMediaStream, { mimeType: 'audio/webm' });

    // Each chunk of recorded audio is sent to the server as a Blob
    mediaRecorder.ondataavailable = event => {
      socket.emit('sound-blob', event.data);
    };

    // Without a timeslice argument, dataavailable only fires when
    // recording stops (see below)
    mediaRecorder.start();
  });



// Could be just a static html file
const App = () => (
  <div>
    <h1>Audio streaming client</h1>
    <audio controls>
      <source src="http://example.com/stream.mp3" type="audio/mpeg" />
    </audio>
  </div>
)

Ideal server:

const express = require('express');
const http = require('http');
const stream = require('stream');

const app = express();

// Standalone socket.io server for the broadcaster (port 8888)
const io = require('socket.io')(8888);

// Readable with a no-op read(): data is pushed in from socket.io events
const audioStream = new stream.Readable({ read() {} });

app.get('/stream.mp3', (req, res) => {
  audioStream.pipe(res);
});

io.on('connection', (socket) => {
  socket.on('sound-blob', (blob) => {
    audioStream.push(blob);
  });
});

// HTTP server for listeners (port 8080)
const server = http.createServer(app);
server.listen(8080);

Right now, the ondataavailable event only fires when the recording ends, but I think it would be possible to split the recording into chunks and stream them in real time. I'm not sure of the appropriate approach for this.
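One approach might be the timeslice argument to MediaRecorder.start(): passing a number of milliseconds should make dataavailable fire at that interval instead of only on stop. A minimal sketch of the broadcasting part, assuming 250 ms chunks are small enough to feel "live":

// Fires roughly every 250 ms with the audio recorded since the last event
mediaRecorder.ondataavailable = event => {
  socket.emit('sound-blob', event.data);
};

// Request a chunk every 250 ms instead of waiting for stop()
mediaRecorder.start(250);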

Once a stream is sent to the server, there will probably be some encoding / converting to do before piping it to the /stream.mp3 endpoint, since MediaRecorder produces WebM/Opus rather than MP3. I don't know if this is strictly necessary either.
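If it is, a hedged sketch of one option, assuming ffmpeg is installed on the server and reusing audioStream and app from above (this would replace the /stream.mp3 handler):

const { spawn } = require('child_process');

// Transcode WebM/Opus from stdin to MP3 on stdout
const ffmpeg = spawn('ffmpeg', [
  '-i', 'pipe:0',           // read WebM input from stdin
  '-f', 'mp3',              // output format: MP3
  '-codec:a', 'libmp3lame', // MP3 encoder
  'pipe:1',                 // write the result to stdout
]);

// Feed the raw broadcaster blobs in, serve the transcoded bytes out
audioStream.pipe(ffmpeg.stdin);

app.get('/stream.mp3', (req, res) => {
  res.set('Content-Type', 'audio/mpeg');
  ffmpeg.stdout.pipe(res);
});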

Would this even be possible to do? Are there pitfalls I'm not seeing?

Thanks for sharing your thoughts!
