
I'm building a WebRTC application with the simple-peer npm package.

I want to know the purpose of all these topics (SFU, Janus, mediasoup, Medooze) and how I can integrate them to improve my application's performance.

PS: I'm using a Node.js server to handle the requests and signaling between peers in my architecture. Are those servers and services required for my application to perform well?

Hope I could find an answer here ...

CTMA
  • Does this answer your question? [Understanding SFU's, TURN servers in WebRTC](https://stackoverflow.com/questions/61287054/understanding-sfus-turn-servers-in-webrtc) – Sean DuBois May 09 '20 at 21:41
  • No, because I'm already using STUN/TURN, and I want to know the purpose of the servers I mentioned above and whether they are required for the performance of my system. – CTMA May 10 '20 at 15:36

3 Answers


With regular WebRTC, every peer needs to send and receive its data separately to and from every other peer. So let's say there are 10 peers in a video chat. Then every peer has to send their video 9 times simultaneously and also receive 9 streams. Every peer would use a large amount of upload bandwidth, which they usually don't have.

SFUs solve this problem by having every peer send only one stream to a media server and letting that server do all the routing to the other peers. This way every peer only sends 1 stream and receives 9. The maximum download bandwidth is usually higher than the upload bandwidth.

There is also something called simulcast, which automatically switches the quality of the stream depending on the available bandwidth of the peer. I have been able to achieve this with mediasoup.
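
For a rough idea of what that looks like in code, here is a minimal client-side sketch using mediasoup-client. It assumes `routerRtpCapabilities` and `transportOptions` come from your own signaling server, and the bitrates and layer count are just example values, not recommendations:

```js
import { Device } from 'mediasoup-client';

// routerRtpCapabilities and transportOptions are assumed to come from your
// own signaling server (the question uses socket.io for signaling).
export async function publishWithSimulcast(routerRtpCapabilities, transportOptions) {
  const device = new Device();
  await device.load({ routerRtpCapabilities });

  const sendTransport = device.createSendTransport(transportOptions);
  // The transport's 'connect' and 'produce' events must be wired to your
  // signaling layer; that part is omitted here.

  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const track = stream.getVideoTracks()[0];

  // Simulcast: publish three encodings of the same track; the SFU decides
  // which layer to forward to each receiver based on their bandwidth.
  return sendTransport.produce({
    track,
    encodings: [
      { maxBitrate: 150000, scaleResolutionDownBy: 4 }, // low layer
      { maxBitrate: 500000, scaleResolutionDownBy: 2 }, // medium layer
      { maxBitrate: 1500000 }                           // high layer
    ]
  });
}
```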

Dirk V
  • Thank you for your reply. Right now my server side uses Node.js, and as you said, if I have 10 peers I have one peer as the leader of the whole orchestra: that peer will send the stream to all the others, and all the others will send one and receive one. How can I use an SFU in my case? – CTMA May 29 '20 at 14:40
  • @Lakkini "all others will send one and receive one" What do you mean? If you need just one leader who is the only one broadcasting, then he will send 1 stream, and all receivers will also receive just one stream. If you want a call in a room between everyone, then every participant will send 1 stream and receive a stream for every other participant – Dirk V May 29 '20 at 16:33
  • Yes, like broadcasting. To give more detail, it's an education app where the teacher sees all the students and the students see just the teacher, not their classmates, and all of that respects the rooms concept (I built that concept with socket.io) – CTMA May 30 '20 at 17:03
  • @Lakkini Here I have a GitHub project with an example of multi-user video/audio conferencing using mediasoup [github repo](https://github.com/Dirvann/mediasoup-sfu-webrtc-video-rooms). You can take it as an example and apply it to your project. – Dirk V May 31 '20 at 00:24
  • Thank you so much, I will take a look at it – CTMA Jun 01 '20 at 19:40
  • @Lakkini You're welcome. If it answered your question then you can mark the response as answered. – Dirk V Jun 01 '20 at 23:27
  • let me check first please, and thank you for your interaction – CTMA Jun 02 '20 at 11:15
  • I already went through your example but I have one problem: when I produce the video from the second participant, neither participant can consume the video, meaning the video doesn't appear on either side. Why is that? Thank you. – CTMA Sep 17 '20 at 11:23
  • @Lakkini That probably has something to do with how you set this up, and with the right configuration. – Dirk V Sep 17 '20 at 12:02
  • I took the same one as you; I just want to ask about config.js, because I didn't change the data there where you said to replace the IP address with a public one. – CTMA Sep 17 '20 at 12:09
  • If you don't change it, you can only use it on localhost in different browser tabs. – Dirk V Sep 17 '20 at 12:23
  • I'm using a remote IP, so I just changed the **#announcedIp** to my remote address and everything works now!! Thank you so much. I just want to ask about the **codec** in the configuration file: will those codecs work with all kinds of laptop devices? – CTMA Sep 17 '20 at 12:54
  • Yes, it works with practically all devices; it's a pretty standard codec – Dirk V Sep 17 '20 at 13:27
  • Okay, I'm going to try it. Will it support 500 consumers at a time with a mediasoup extension? – CTMA Sep 19 '20 at 14:11
  • 500 consumers is the guideline, but it depends on the CPU/RAM of the machine and such – Dirk V Sep 21 '20 at 15:03
  • After many tests with the library I noticed an error message in the backend that said: **no more available ports [transport: UDP, IP:’0.0.0.0’, numAttempt:101]**. I'm really confused about that! Is it an architecture problem that means I reached the maximum number of consumers and producers, or is it another error? – CTMA Oct 05 '20 at 14:49
  • Do you have more than 100 connections? – Dirk V Oct 05 '20 at 21:39
  • Do you mean the number of consumers + producers > 100 in total? – CTMA Oct 07 '20 at 14:30
  • @Lakkini I mean the amount of transports – Dirk V Oct 07 '20 at 19:49

A WebRTC SFU server can:

  • Forward: Each peer only needs to send 1 stream to the SFU, which forwards it to the other peers in the room.
  • Simulcast: If the stream is a simulcast stream, the SFU can forward streams at different bitrates like an MCU, but with less CPU cost because there is no transcoding.
  • Protocol Converter: The SFU can also convert WebRTC to other protocols, for example publishing to YouTube over RTMP.
  • DVR: Record the WebRTC stream as a VoD file, such as an MP4 file.
  • Network Quality: The SFU provides better network quality, especially when P2P cannot be enabled.
  • Firewall Traversal: For peers behind an enterprise firewall, UDP might not be available; the SFU can use the HTTP (TCP/80) or HTTPS (TCP/443) port.

Forward

The default model of WebRTC is P2P like this:

PeerA(WebRTC/Chrome) --------> PeerB(WebRTC/Chrome)
PeerB(WebRTC/Chrome) --------> PeerA(WebRTC/Chrome)

If you have three participants in a room:

PeerA(WebRTC/Chrome) --------> PeerB(WebRTC/Chrome)
PeerA(WebRTC/Chrome) --------> PeerC(WebRTC/Chrome)
PeerB(WebRTC/Chrome) --------> PeerA(WebRTC/Chrome)
PeerB(WebRTC/Chrome) --------> PeerC(WebRTC/Chrome)
PeerC(WebRTC/Chrome) --------> PeerA(WebRTC/Chrome)
PeerC(WebRTC/Chrome) --------> PeerB(WebRTC/Chrome)

In the P2P model, each peer needs to send N-1 streams and receive N-1 streams from other peers, which requires a lot of upload bandwidth.

An SFU can forward the stream to the other peers, like this:

PeerA(WebRTC/Chrome) ---> SFU --+--> PeerB(WebRTC/Chrome)
                                +--> PeerC(WebRTC/Chrome)
PeerB(WebRTC/Chrome) ---> SFU --+--> PeerA(WebRTC/Chrome)
                                +--> PeerC(WebRTC/Chrome)
PeerC(WebRTC/Chrome) ---> SFU --+--> PeerA(WebRTC/Chrome)
                                +--> PeerB(WebRTC/Chrome)

In the SFU model, each peer only needs to send 1 stream and receive N-1 streams, so this model is better than P2P, especially when there are many peers in a room.
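
As a rough sketch of how an SFU implements this forwarding (shown here with mediasoup, one of the servers mentioned in the question; signaling, error handling and per-peer bookkeeping are omitted, and the codec list and `'YOUR_PUBLIC_IP'` placeholder are just example values):

```js
const mediasoup = require('mediasoup');

// One worker (a separate media process) per CPU core is typical; one router per room.
async function createRoom() {
  const worker = await mediasoup.createWorker();
  return worker.createRouter({
    mediaCodecs: [
      { kind: 'audio', mimeType: 'audio/opus', clockRate: 48000, channels: 2 },
      { kind: 'video', mimeType: 'video/VP8', clockRate: 90000 }
    ]
  });
}

// One WebRTC transport per peer and direction.
async function createTransport(router) {
  return router.createWebRtcTransport({
    listenIps: [{ ip: '0.0.0.0', announcedIp: 'YOUR_PUBLIC_IP' }],
    enableUdp: true,
    enableTcp: true
  });
}

// PeerA publishes once: kind and rtpParameters come from the client via signaling.
async function publish(sendTransport, kind, rtpParameters) {
  return sendTransport.produce({ kind, rtpParameters });
}

// The SFU then creates one consumer of that producer per receiving peer,
// so PeerA uploads 1 stream while every other peer downloads its own copy.
async function subscribe(recvTransport, producer, rtpCapabilities) {
  return recvTransport.consume({ producerId: producer.id, rtpCapabilities });
}
```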

Simulcast

Because each peer's network is different, the SFU can use simulcast to send different bitrates to different peers. It works like this:

PeerA(WebRTC/Chrome) --1Mbps-> SFU --+--1Mbps----> PeerB(WebRTC/Chrome)
                                     +--500Kbps--> PeerC(WebRTC/Chrome)

Because PeerC's network is worse, the SFU sends it the stream at 500Kbps.

Please note that this requires PeerA to use the AV1 codec; H.264 is not supported by default, so it's not a perfect solution.

It's also complex, and PeerC might not want a low-bitrate stream but might accept higher latency instead, so this solution does not always work.

Note: Simulcast is not the same as an MCU, which requires a lot of CPU for transcoding. An MCU mixes the streams in a room into 1 stream for a peer to receive, so it is used in scenarios such as embedded SIP devices, which only receive 1 stream with video and audio.

There are lots of SFU servers that can do this, for example SRS, mediasoup, Janus and Licode.

Note: Right now (2023.02), the SRS simulcast feature has not been merged to develop; it's in a feature branch.
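
To make the layer switching concrete, here is a small hedged sketch of what mediasoup exposes for this: the client publishes several encodings (as in the simulcast example in the first answer), and the server can cap the layer forwarded to a slow peer. The layer numbers are just examples:

```js
// consumer: a mediasoup Consumer created from a simulcast producer.
async function capQualityForSlowPeer(consumer) {
  // Layer 0 is the lowest-bitrate spatial layer; mediasoup will not forward
  // anything above these preferred layers to this peer. mediasoup still
  // adjusts the actual delivered layer based on its bandwidth estimation.
  await consumer.setPreferredLayers({ spatialLayer: 0, temporalLayer: 2 });
}
```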

Protocol Converter

Sometimes you want to convert WebRTC to live streaming, for example, to open a web page and publish the camera stream to YouTube.

How can an SFU do that? It works like this:

Chrome --WebRTC------> SFU ---RTMP------> YouTube/Twitch/TikTok
        (H.264+OPUS)        (H.264+AAC)

In this model, the SFU only needs to convert the audio stream from Opus to AAC; the video stream passes through, because both WebRTC and RTMP use H.264.

Because of the audio transcoding, only a few SFU servers can do this, for example SRS and Janus.

Note: Janus needs FFmpeg to convert the RTP packets, while SRS does this natively, so it's easier to use.
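
For a concrete picture of that audio-only transcode, here is a hedged sketch of the FFmpeg step that a Janus-style setup performs, driven from Node. It assumes the SFU is already forwarding the RTP packets and that they are described by a local `stream.sdp` file; `STREAM_KEY` is a placeholder:

```js
const { spawn } = require('child_process');

// Copy the H.264 video as-is, transcode Opus audio to AAC, and push RTMP.
const ffmpeg = spawn('ffmpeg', [
  '-protocol_whitelist', 'file,udp,rtp',
  '-i', 'stream.sdp',  // RTP streams forwarded by the SFU
  '-c:v', 'copy',      // video is H.264 on both sides: no transcode
  '-c:a', 'aac',       // audio: Opus -> AAC
  '-f', 'flv',
  'rtmp://a.rtmp.youtube.com/live2/STREAM_KEY'
]);

ffmpeg.stderr.on('data', (chunk) => process.stderr.write(chunk));
```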

DVR

The SFU can also DVR WebRTC streams to an MP4 file, for example:

Chrome ---WebRTC---> SFU ---DVR--> MP4

This enables you to use a web page to upload an MP4 file, for example, to let a user record a clip from their camera as feedback for your product.

Similar to live streaming, MP4 files support AAC better, so the SFU needs to convert Opus to AAC.

Because of the audio transcoding, only a few SFU servers can do this, for example SRS.

Note: I'm not sure which other SFU servers support this; please let me know if I missed something.

Network Quality

On the internet, the SFU model is better than the P2P model. Consider the flow below:

PeerA <----Internet--> PeerB

P2P seems simple and efficient, but there are actually lots of routers and network devices (generally servers) in between, so the flow of the P2P model is really:

PeerA <--------Internet----------> PeerB
         Routers, Servers, etc.

From the perspective of network transport, the SFU model is similar:

PeerA <--------SFU-Server----------> PeerB
         Routers, Servers, etc.

SFU network quality is better than P2P not because of the server itself, but because you are able to control the transport network by using a dedicated server and even a dedicated network.

But with P2P you can't control the routers and servers; all peers are just clients.

Note: The TURN server model also improves network quality, but an SFU is still better because you can run QoS algorithms such as GCC on it; the SFU server is actually a client, while TURN is just a proxy.

Note: An SFU cluster, which is built from a set of SFU servers, can also improve the quality when peers connect across countries.

Firewall Traversal

For some users behind an enterprise firewall, UDP is not available:

        Firewall
            |
Chrome -----X---WebRTC--- Chrome(PeerB)
PeerA       |   (UDP)

Even worse, some firewalls only allow HTTP (TCP/80) or HTTPS (TCP/443). So we can use an SFU that listens on HTTP (TCP/80) or HTTPS (TCP/443); it works like this:

        Firewall
            |
Chrome -----+---WebRTC-------> SFU ---> Chrome(PeerB)
PeerA       |   (TCP 80/443)

Note: Yes, a TURN server such as coturn can also solve this problem, but note that a TURN server usually allocates a range of ports rather than fixed ports, so a TURN server is not as easy to use as an SFU server.

Only a few SFU servers can do this, for example SRS and mediasoup.

Note: I'm not sure which other SFU servers support this; please let me know if I missed something.
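
Related to the note about TURN: since the question uses simple-peer, here is a minimal sketch of how a client could be pointed at a TURN server reachable over TCP/443 as a fallback. The TURN hostname and credentials are hypothetical; simple-peer passes `config` straight to RTCPeerConnection:

```js
const Peer = require('simple-peer');

const peer = new Peer({
  initiator: true,
  config: {
    iceServers: [
      // Plain STUN for the common case.
      { urls: 'stun:stun.l.google.com:19302' },
      // TURN over TLS on TCP/443 so media can pass restrictive firewalls.
      {
        urls: 'turns:turn.example.com:443?transport=tcp',
        username: 'user',
        credential: 'secret'
      }
    ]
  }
});
```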

Winlin

Following my question above and a lot of research afterwards, I found that:

SFU is the (server-side) technology that leads the WebRTC communication:

  • How to produce (share) the stream between peers.
  • How to consume this stream of media on the other peers.
  • How the topology, if I can call it that, works between PRODUCERS (the ones who share the stream) and the CONSUMERS.

This is the general idea; you have to go deeper for the implementation.

The services I asked about, like mediasoup, Medooze, etc., are services that implement the SFU technology.

You can pick one of them and learn how to implement an SFU through it.

CTMA