
Here's my code to capture a video stream and encode it to WebM (using https://github.com/GoogleChromeLabs/webm-wasm):

async function captureWebm() {
    console.log("Started webm capture");
    // First message tells the worker where to fetch the wasm module;
    // it replies with a message once the encoder is ready.
    worker.postMessage("./webm-wasm.wasm");
    await nextEvent(worker, "message");
    console.log("webm worker loaded");
    // Configure the encoder: timebase 1/30 = 30 fps, bitrate in kilobits/s.
    worker.postMessage({
        width: w,
        height: h,
        timebaseNum: 1,
        timebaseDen: 30,
        bitrate: 1500,
        realtime: true
    });
    let encodeWebm = async function () {
        // Grab the current frame and hand its pixel buffer to the encoder.
        mCtx.drawImage(player, 0, 0, w, h);
        const imageData = mCtx.getImageData(0, 0, w, h);
        const buffer = imageData.data.buffer;
        worker.postMessage(buffer, [buffer]); // transfer ownership, don't copy
        requestAnimationFrame(encodeWebm);
    };
    requestAnimationFrame(encodeWebm);
}
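(The `nextEvent` helper isn't shown above; it comes from the webm-wasm demo code. A minimal sketch, assuming it just wraps a one-shot event listener in a promise:)

```javascript
// Resolves with the first event of the given name fired by the target
// (a Worker, a WebSocket, or any other EventTarget).
function nextEvent(target, name) {
    return new Promise(resolve => {
        target.addEventListener(name, resolve, { once: true });
    });
}
```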

And here's the listener:

let queue = [];
worker.onmessage = ev => {
    if (!ev.data) {
        // webm-wasm posts a null message when the stream ends.
        console.log('End of stream');
        return;
    }
    if (ev.data instanceof ArrayBuffer) {
        queue.push(ev.data);
    }
};

And finally build the video:

setInterval(function () {
    let webm = buildWebmVideoFromArrayOfBuffer(queue);
    queue = [];
    socket.send(webm);
}, 500);
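(`buildWebmVideoFromArrayOfBuffer` is the missing piece I'm asking about. The simplest version I can think of just concatenates the queued buffers into one byte array, assuming webm-wasm emits raw WebM bytes in order; note that only the very first chunk would contain the EBML header, so later chunks wouldn't be independently playable as-is:)

```javascript
// Concatenate an array of ArrayBuffers into a single Uint8Array.
function buildWebmVideoFromArrayOfBuffer(buffers) {
    const total = buffers.reduce((sum, b) => sum + b.byteLength, 0);
    const out = new Uint8Array(total);
    let offset = 0;
    for (const b of buffers) {
        out.set(new Uint8Array(b), offset);
        offset += b.byteLength;
    }
    return out; // can be passed to socket.send() as binary
}
```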

Without using MediaSource, how can I build a WebM video from this array of buffers? My goal is to build a WebM video every 500 ms.

quarks
  • Why wouldn't you just use MediaRecorder? Also, are you actually looking for a sequence of independently playable 500ms videos? For that, you'd need a way to set the GOP length to like, 15, which is very low and not efficient at all. I also don't think you can set the GOP length with MediaRecorder. Maybe you can with this WASM module... is that why you're using it? – Brad Apr 20 '20 at 00:28
  • I'm doing live streaming right now using an MJPEG/AVI container with my custom encoder/decoder; it works fine, fast, and in real time. The only issue is that the size of the JPEGs consumes more and more bandwidth over time, so I want to encode to WebM before transmission over the network, with the same methodology: create 500 ms video chunks, that is, 15 frames per 500 ms, which fits 30 FPS. – quarks Apr 20 '20 at 01:13
  • 1
    It doesn't sound like you need individually playable chunks. You could just store the initialization segment separately. Why not MediaRecorder?... especially if you care about performance. And, if you really need a GOP size of 15 (I'd assume, for latency) why not consider the whole WebRTC stack and get all of those optimizations for free? – Brad Apr 20 '20 at 01:22
  • Yes, I have considered WebRTC, but at this phase I have to try this approach first. When you say use `MediaRecorder`, do you mean `MediaRecorder` can record 500 ms chunks of WebM video that are playable individually? I've tried it, but the output is just the same as with webm-wasm: an array buffer. – quarks Apr 20 '20 at 06:41
  • You didn't clarify why you want them to be played individually... that's the part of your requirement I don't understand. – Brad Apr 20 '20 at 14:49
  • For live streaming, viewers can be able to play stream at that interval. Viewers come and go from the socket so I was thinking that it would need to have each stream as independent video chunk. – quarks Apr 20 '20 at 17:31
  • You can keep the initialization data separate from the subsequent chunks of data. – Brad Apr 20 '20 at 18:03
  • @Brad I successfully built a MediaRecorder -> DataChannels -> MediaSource pipeline, but I'm facing problems when a client/peer enters an "ongoing" stream/broadcast, as the data captured by the MediaRecorder doesn't contain the initialization, as far as I can see? Just capturing the "first frames" from the recorder probably doesn't do the trick? So, initialization data? Can you clarify? Regards, Magnus – dathor May 14 '20 at 14:19
  • @dathor Download a copy of EBMLViewer (or equivalent) and open up a WebM file. (You'll have to use a regular file, not a streaming one since EBMLViewer doesn't know how to handle indefinite length files, but this is just for demonstration purposes.) Take a look at its structure, and you'll see a bunch of data up to the first Cluster element. Everything before the first Cluster can be treated as initialization data. That means that in your implementation, all you have to do is buffer everything until the first Cluster and keep a copy of it as initialization data later. – Brad May 14 '20 at 14:26
  • @Brad I have prepended the initialization data to another chunk, but that chunk could not play. Can you please suggest something on this question: https://stackoverflow.com/questions/62236838/how-to-play-webm-files-individually-which-are-created-by-mediarecorder – Suman Bogati Jun 06 '20 at 19:23
  • I don't get it... you're not the first, and certainly not the last, coming here asking us how to build a whole live transcoder from scratch that should behave exactly like what WebRTC offers, just to avoid setting up a WebRTC system. The former will take months to develop, the latter a single day. Why do you postpone the right solution for later? – Kaiido Jun 07 '20 at 01:42
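
Following Brad's suggestion in the comments, everything before the first Cluster element can be treated as initialization data and prepended to later chunks. A sketch of that split, using a naive byte scan for the Cluster element's 4-byte EBML ID (`0x1A45DFA3` starts the header; `0x1F43B675` marks a Cluster). A real EBML parser would be more robust, since the ID bytes could in principle appear inside earlier element payloads:

```javascript
// Split a WebM byte stream into its initialization segment (everything
// before the first Cluster) and the media data from the first Cluster on.
function splitInitSegment(bytes) {
    const clusterId = [0x1f, 0x43, 0xb6, 0x75];
    for (let i = 0; i + 4 <= bytes.length; i++) {
        if (clusterId.every((v, j) => bytes[i + j] === v)) {
            return {
                init: bytes.slice(0, i),  // keep; prepend to later chunks
                media: bytes.slice(i)     // first Cluster onward
            };
        }
    }
    // No Cluster seen yet: the whole buffer is still header data.
    return { init: bytes, media: new Uint8Array(0) };
}
```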

0 Answers