
I implemented CMAF on a self-hosted nginx server with FFmpeg, but I've hit some technical bottlenecks. My latency stays at 3 seconds and won't go any lower, and I haven't been able to get chunked transfer working.

Briefly, my environment: I use OBS to push the live stream to the server over RTMP, package it on the server with FFmpeg (stream copy, no re-encode), and deliver the content to users through a CDN.
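For completeness, the RTMP ingest is a stock nginx-rtmp-module setup along these lines (a sketch; the application name matches the rtmp://127.0.0.1:1935/live/stream URL that the FFmpeg command below pulls from):

    rtmp {
        server {
            listen 1935;          # port OBS pushes to
            application live {
                live on;          # accept live publishing
                record off;       # don't write FLV recordings to disk
            }
        }
    }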

Here is the relevant part of my configuration and code.

ffmpeg:

sudo ffmpeg -i rtmp://127.0.0.1:1935/live/stream -loglevel 40 \
    -c copy -sc_threshold 0 -g 60 -bf 0 -map 0 \
    -f dash -strict experimental -use_timeline 1 -use_template 1 \
    -seg_duration 1 -window_size 5 \
    -adaptation_sets "id=0,streams=v id=1,streams=a" \
    -streaming 1 -dash_segment_type mp4 \
    -utc_timing_url "http://time.akamai.com/?iso" \
    -movflags frag_keyframe+empty_moov+default_base_moof \
    -ldash 1 -hls_playlist 1 -master_m3u8_publish_rate 1 \
    -remove_at_exit 1 \
    /var/www/html/live/manifest.mpd
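If I understand the dash muxer options correctly, chunked transfer only pays off once each segment is itself split into smaller CMAF chunks; with the flags above, fragments are only cut at keyframes. Something like the following additions should produce sub-second chunks (the 0.2 s chunk duration and the 1 s target are illustrative values, not tested recommendations):

    # illustrative additions to the dash muxer flags above
    -frag_type duration -frag_duration 0.2   # cut ~200 ms CMAF chunks inside each 1 s segment
    -target_latency 1                        # advertise a 1 s latency target in the MPD ServiceDescription
    -write_prft 1                            # write ProducerReferenceTime boxes so players can measure latency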

nginx config:

server {
    listen 80;
    server_name myserver.com;

    add_header Access-Control-Allow-Origin *;
    add_header Access-Control-Allow-Methods 'GET, POST, OPTIONS';
    add_header Access-Control-Allow-Headers 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
    add_header Access-Control-Expose-Headers 'Content-Length,Content-Range';

    root /var/www/html;
    index index.html index.nginx-debian.html;

    location / {
        # on by default for HTTP/1.1; has no effect on complete static files,
        # which nginx serves with a Content-Length header instead
        chunked_transfer_encoding on;
    }
}
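As far as I can tell, this is why chunked transfer never kicks in: nginx's static file handler only serves finished files with a known Content-Length, so it cannot stream a segment while FFmpeg is still writing it; that needs an origin that can read growing files. What nginx can at least do is serve correct MIME types and keep the manifests uncached, roughly:

    # sketch: MIME types and cache headers for the DASH/HLS output directory
    location /live/ {
        types {
            application/dash+xml mpd;
            application/vnd.apple.mpegurl m3u8;
            video/mp4 mp4;
            video/iso.segment m4s;
        }
        location ~ \.(mpd|m3u8)$ {
            # note: add_header here suppresses the server-level CORS headers,
            # so they may need to be repeated in this block
            add_header Cache-Control no-cache;
        }
    }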

HTML player:

<!DOCTYPE html>
<html lang="zh-tw">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>streaming test</title>
    <script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>
    <script src="https://cdn.dashjs.org/latest/dash.all.min.js"></script>
    <style>
        body {
            margin: 0;
            font-family: Arial, sans-serif;
            display: flex;
            justify-content: center;
            align-items: center;
            min-height: 100vh;
            background-color: #000;
        }

        #video {
            max-width: 100%;
            max-height: 100vh;
        }
    </style>
</head>
<body>
    <video id="video" controls></video>

    <script>
        const video = document.getElementById('video');
        const hlsSrc = '/live/master.m3u8';   // HLS master playlist written by -hls_playlist 1
        const dashSrc = '/live/manifest.mpd'; // must match the FFmpeg output name (manifest.mpd, not stream.mpd)

        function isHlsSupported() {
            return Hls.isSupported() || video.canPlayType('application/vnd.apple.mpegurl');
        }

        function isDashSupported() {
            return !!window.MediaSource && !!MediaSource.isTypeSupported('video/mp4; codecs="avc1.4d401e,mp4a.40.2"');
        }

        if (isHlsSupported()) {
            // Use HLS for playback
            const hls = new Hls({
                lowLatencyMode: true,// Enable low-latency mode
                liveSyncDurationCount: 1, // Number of segments used to sync live stream
                liveMaxLatencyDurationCount: 2,// Max latency, in segments, before the player seeks back to the live edge
                maxBufferLength: 2,// Max buffer length in seconds
                maxBufferSize: 1000 * 1000 * 100,// Max buffer size in bytes
                backBufferLength: 0// Max back buffer length in seconds (0 disables it; liveBackBufferLength is the deprecated pre-1.x name)
            });
            hls.loadSource(hlsSrc);
            hls.attachMedia(video);
            hls.on(Hls.Events.MANIFEST_PARSED, () => {
                video.play();
            });
        } else if (isDashSupported()) {
            // Use DASH for playback
            const player = dashjs.MediaPlayer().create();
            player.initialize(video, dashSrc, true);
            player.updateSettings({
                // NB: these flat keys match dash.js v3; in v4+ they moved
                // (e.g. streaming.delay.liveDelay, streaming.liveCatchup.minDrift)
                streaming: {
                    lowLatencyEnabled: true, // Enable low-latency mode
                    liveDelay: 1, // Target live delay in seconds (one 1 s segment, rather than the default 3x segment duration)
                    liveCatchUpPlaybackRate: 1.2, // Playback rate used to catch up to the live edge
                    liveCatchUpMinDrift: 0.5, // Minimum drift (s) from the live edge before catch-up starts
                    bufferTimeAtTopQuality: 3, // Maximum buffer length in seconds
                    bufferToKeep: 0 // Back buffer duration in seconds (0 disables the back buffer)
                }
            });
        } else {
            console.error('Neither HLS nor DASH playback is supported in this browser.');
        }
    </script>
</body>
</html>

I hope to reduce the latency to 1 second.
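To see where the time is actually going, it helps to log the players' own latency estimates (a sketch; hls.latency is the hls.js v1 property and getCurrentLiveLatency() is the dash.js API):

    // inside the HLS branch, after hls.attachMedia(video):
    hls.on(Hls.Events.FRAG_CHANGED, () => {
        console.log('hls.js live latency (s):', hls.latency);
    });

    // inside the DASH branch, after player.initialize(...):
    setInterval(() => {
        console.log('dash.js live latency (s):', player.getCurrentLiveLatency());
    }, 1000);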

  • If you need low latency, you're fundamentally choosing the wrong technology. – Brad Jun 13 '23 at 21:05
  • Yes, I know other technologies can more easily achieve low latency, but what I want is the compatibility of CMAF. Also, it's possible to achieve 1-second latency with CMAF, right? – dannyomni Jun 14 '23 at 01:35
  • Eh, probably someone has done it, but in practice that latency target is totally unreasonable for any sort of chunked transport. Hilariously enough, the industry loves things like this but it's all just getting back to the same old HTTP transfer we had in the 90s, which can be done today with glass-to-glass latency well below 500ms. And, you can't get much more compatible than playing a stream with a simple `<video>` tag. – Brad Jun 14 '23 at 01:44
  • You should pick the right set of tradeoffs for your application. The first question you need to answer is what sort of scale you need, because your choice of CDN dictates what tech you can use. While you're deciding that, you also need to determine what cost you are willing to pay for that latency. Another 500-1000 milliseconds may not matter to your viewers, but might quadruple your costs. – Brad Jun 14 '23 at 01:45
  • My target audience is likely in the hundreds, and real-time interactivity is important. But if I only use the `<video>` tag… – dannyomni Jun 14 '23 at 02:07
  • Once smooth playback is established, you can drift forward by slightly increasing the playback rate (see the sketch after these comments). This is the same thing that HLS players do. The difference is, you can skip all that overhead. You might have parallel encoding and regular HLS distribution anyway, for those with inadequate connections. For what it's worth, I can only think of a few use cases where true real-time interactivity is required for **all users**. An illusion of realtimeness (just a few seconds of latency) for some users can save you a ton of expense. – Brad Jun 14 '23 at 03:15
  • Okay, I might try modifying my HTML. – dannyomni Jun 14 '23 at 07:38
  • @Brad - very interesting discussion here! Low-latency DASH and HLS do seem to be getting scarily complex, so basic HTTP streaming for live streams using the video tag, with glass-to-glass latency around 500ms, sounds very appealing. Do you have any more background or examples? I'd be very interested to explore a bit more. – Mick Jun 23 '23 at 09:59
  • @Mick The way I have done this is to use a simple Node.js server that runs FFmpeg in a child process. FFmpeg is configured to output WebM. The Node.js server lightly parses the stream, tracking everything before the first Cluster element as initialization data. When a new client connects, the server sends it initialization data and then starts sending Cluster elements from the live stream. The browser will start playback immediately. As mentioned previously, you may need to drift to handle the buffer in a smooth way, as not all clients have robust connections. – Brad Jun 23 '23 at 20:24
  • @Mick See also: https://stackoverflow.com/a/56062582/362536 – Brad Jun 23 '23 at 20:24
  • @Brad - Thanks. Really interesting. I'll take a closer look. – Mick Jun 24 '23 at 11:28
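The playback-rate drift Brad describes can be done on a bare video element along these lines (a sketch; the 1.5 s threshold and 1.05x rate are arbitrary starting points, not tuned values):

    // speed up slightly whenever the buffer runs ahead, so playback
    // creeps toward the live edge instead of letting latency accumulate
    const v = document.getElementById('video');
    setInterval(() => {
        if (v.buffered.length === 0) return; // nothing buffered yet
        const ahead = v.buffered.end(v.buffered.length - 1) - v.currentTime;
        v.playbackRate = ahead > 1.5 ? 1.05 : 1.0; // chase the edge, then settle
    }, 500);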

0 Answers