
I have two videos here showing the rendering of a decoded MJPEG video sequence. This first one is rendering only one video:


Video description: Left (Source), Middle (Canvas copy of stream), Right (Decoded from network).

At this point the video is smooth, both from source to network and back (over the WebSocket). Up to about 5 videos decoded and rendered it is still reasonably smooth. However, if I render around 20 videos, things start to lag:


My question is: what is the best approach (or algorithm, perhaps in a multi-threaded context) to make this render faster? Here's my code:

        socket.onmessage = function (m) {
            let blob = m.data;
            videoDecoder.postMessage(blob);
            videoDecoder2.postMessage(blob);
            videoDecoder3.postMessage(blob);
            // and so on... to 20
        }

For the purpose of testing, I just post all blobs to different video decoder web workers. Now for the rendering part:

  videoDecoder.onmessage = async function (e) {
      await renderImage(dCtx2, e.data);
  };

So in my code I have 20 of these onmessage handlers (again, for simplicity it's just copy-and-pasted code).
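For reference, a minimal sketch of the same wiring done in a loop instead of 20 copy-pasted blocks; the `videoDecoders` and `contexts` array names are only illustrative, not my actual code:

    const NUM_CHANNELS = 20;
    const videoDecoders = [];
    const contexts = []; // one 2D context per canvas "player"

    for (let i = 0; i < NUM_CHANNELS; i++) {
        videoDecoders[i] = new Worker("videoDecoder.js");
        contexts[i] = document.querySelector("#canvas" + (i + 1)).getContext("2d");
        // each worker posts back one decoded JPEG blob per frame
        videoDecoders[i].onmessage = (e) => renderImage(contexts[i], e.data);
    }

    socket.onmessage = function (m) {
        // broadcast the incoming MJPEG blob to every decoder worker
        for (const decoder of videoDecoders) {
            decoder.postMessage(m.data);
        }
    };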

  async function renderImage(ctx, blob) {
    const isChrome = !!window.chrome && (!!window.chrome.webstore || !!window.chrome.runtime);
    if (isChrome) {
      // Chrome path: draw via an <img> backed by an object URL
      // (note: the function returns before onload fires, so this branch is not actually awaited)
      const blobURL = URL.createObjectURL(blob);
      const img = new Image();
      img.onload = function() {
        ctx.drawImage(img, 0, 0);
        URL.revokeObjectURL(blobURL);
      };
      img.src = blobURL;
    } else {
      // other browsers: decode the JPEG blob into an ImageBitmap and draw it
      const bmp = await createImageBitmap(blob);
      ctx.drawImage(bmp, 0, 0);
      bmp.close();
    }
  }
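Side note on the Chrome branch: since the image load is not awaited, the promise returned by renderImage resolves before the frame is actually drawn. A small sketch that does wait for the load before resolving, with loadAndDraw being only an illustrative name:

    function loadAndDraw(ctx, blob) {
      return new Promise((resolve, reject) => {
        const blobURL = URL.createObjectURL(blob);
        const img = new Image();
        img.onload = () => {
          ctx.drawImage(img, 0, 0);
          URL.revokeObjectURL(blobURL);
          resolve();
        };
        img.onerror = reject;
        img.src = blobURL;
      });
    }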

Since the video decoders are Web Workers, each running on its own thread, they are not the cause of the FPS drop in the canvas "players" in the UI. The renderImage method is the main cause: it is what lags the main (UI) thread. What can be done to render each canvas context on a web worker, perhaps, so that no matter how many players I have they won't affect each other?

UPDATE:

I moved the rendering to web workers by transferring control of each canvas to its worker:

    videoDecoders[i] = new Worker("videoDecoder.js");
    const canvas = document.querySelector("#canvas" + (i + 1)); // get the canvas
    const offscreen = canvas.transferControlToOffscreen();
    videoDecoders[i].postMessage({ action: 'init', canvas: offscreen }, [offscreen]);

Then, inside the worker, it is basically the same render-image logic, just drawing to the transferred OffscreenCanvas (see the sketch below).

And it did not help with making the playback smooth.
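For reference, a minimal sketch of what the worker side (videoDecoder.js) looks like under this setup, assuming the worker receives the OffscreenCanvas in the 'init' message and then a single JPEG blob per frame (the MJPEG marker-scanning step is omitted and the names are only illustrative):

    // videoDecoder.js (sketch)
    let ctx = null;

    self.onmessage = async function (e) {
        if (e.data && e.data.action === 'init') {
            // 2D context of the OffscreenCanvas transferred from the main thread
            ctx = e.data.canvas.getContext('2d');
            return;
        }
        if (!ctx) return;
        // e.data is assumed to be one JPEG frame blob extracted from the MJPEG stream
        const bmp = await createImageBitmap(e.data);
        ctx.drawImage(bmp, 0, 0);
        bmp.close();
    };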

  • You mean you have one Web-Worker per decoder? Having more than `navigator.hardwareConcurrency` will make each of them slower. As for the actual question, obviously using a real video stream would make things easier to handle for the browser, rather than having to decode 20 still images every frame -> Use MediaStreams. – Kaiido Apr 21 '20 at 05:21
  • Yes one video decoder for each video channel. And in my case video coming from the socket (source) is a MJPEG video container. And I have this video decoder that extracts images from the binary through looping with markers. How can I use MediaStreams in this context? – quarks Apr 21 '20 at 05:33
  • Replace your WebSockets settings to either a video-stream (e.g MPEG-DASH), or to a WebRTC setting. – Kaiido Apr 21 '20 at 05:38
  • @Kaiido I think I am getting what you mean now, do you mean like this: https://github.com/GoogleChromeLabs/webm-wasm/blob/master/demo/live.html if yes then my problem with this approach is how do you stream array buffers to the network and expect the receivers to be able to play the stream if they arrive in the middle of the stream, example stream started 15 minutes ago, that is my question. Which brings up this question: https://stackoverflow.com/questions/61303618/create-webm-video-blob-from-series-of-arraybuffer – quarks Apr 21 '20 at 05:39
  • No I mean stream whatever data you have to a server, then let the server broadcast this as a streaming video. For instance I think janus gateway can do it pretty easily, otherwise, [Muaz-Khan's RTCMultiConnection](https://github.com/muaz-khan/RTCMultiConnection) seems to be doing something like this too. – Kaiido Apr 21 '20 at 06:01
  • Will look into it, but for now, the issue I have for this post is mainly rendering and not really related to networking atm. There must be some way to pass the context of a canvas down to a web worker at initialization, so that every decoded frame is rendered on the web worker thread. That may be what I am looking for. – quarks Apr 21 '20 at 06:53
  • But rendering a still image per frame is far more work than rendering video frames. By using real videos, your browser would have far less work to do, and thus would less stress out about rendering many of these. – Kaiido Apr 21 '20 at 06:57
  • You're right but I am still bound to use Websockets. Also my websocket implementation lag is not that much as you can see here: https://media.giphy.com/media/H8LJrqHmr7bNndGiah/giphy.gif and that's for 25 participants in the socket. – quarks Apr 21 '20 at 07:10
  • @kaiido I wonder how Z_O-O_M managed to implement such a canvas approach in rendering their video before. I did some research for the past few days and did some testing; it seems WASM is able to achieve this through shared memory to the canvas, i.e. `uint8clampedarray` – quarks Apr 28 '20 at 02:06

0 Answers