
I would like to use the captureStream() method, but it is either rendered useless by the specification and current bugs, or I am using it completely wrong.
I know that captureStream takes a maximum frame rate as its parameter, not a constant one, and that even this maximum is not guaranteed. However, it is possible to set the MediaStream's currentTime (currently in Chrome; in Firefox setting it has no effect, but Firefox offers requestFrame instead, which is not available on the stream in Chrome), and I expected that requesting frames manually, or manually placing frames in the MediaStream, would override this behaviour. It doesn't.
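
For reference, a minimal sketch (not my actual repro) of what I mean by manual frame requests, assuming the draft-spec behaviour where captureStream(0) only captures a new frame when requestFrame() is called; Firefox exposes requestFrame() on the CanvasCaptureMediaStream, while current Chrome is said to put it on the capture track:

// minimal sketch: with a frameRequestRate of 0, frames should only be added
// to the stream when requestFrame() is called explicitly
var stream = canvas.captureStream(0);

function pushFrame() {
 if (stream.requestFrame) {
  stream.requestFrame();                      // Firefox: CanvasCaptureMediaStream
 } else {
  stream.getVideoTracks()[0].requestFrame();  // Chrome: CanvasCaptureMediaStreamTrack
 }
}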

In Firefox it renders the video smoothly, frame by frame, but the resulting video is as long as the wall-clock time spent processing.
In Chrome there are some dubious black or reordered frames (I do not care about that for now, not until the FPS matches), and manually setting currentTime achieves nothing; the result is the same as in Firefox.

I use modified code from the MediaStream Capture Canvas and Audio Simultaneously answer.

const FPS = 30;
var cStream, vid, recorder, chunks = [], go = true,
 Q = 61, rec = document.getElementById('rec'),
 canvas = document.getElementById('canvas'),
 ctx = canvas.getContext('2d');
ctx.strokeStyle = 'rgb(255, 0, 0)';

function clickHandler() {
 this.textContent = 'stop recording';
 // passing a frame rate here has no effect, whether it is left empty or set to 30
 cStream = canvas.captureStream(FPS);
 recorder = new MediaRecorder(cStream);
 recorder.ondataavailable = saveChunks;
 recorder.onstop = exportStream;
 this.onclick = stopRecording;
 recorder.start();
 draw();
}

function exportStream(e) {
 if (chunks.length) {
  var blob = new Blob(chunks);
  var vidURL = URL.createObjectURL(blob);
  var vid2 = document.createElement('video');
  vid2.controls = true;
  vid2.src = vidURL;
  vid2.onended = function() {
   URL.revokeObjectURL(vidURL);
  }
  document.body.insertBefore(vid2, vid);
 } else {
  document.body.insertBefore(document.createTextNode('no data saved'), canvas);
 }
}

function saveChunks(e) {
  e.data.size && chunks.push(e.data);
}

function stopRecording() {
 go = false;
 this.parentNode.removeChild(this);
 recorder.stop();
}

var loadVideo = function() {
 vid = document.createElement('video');
 document.body.insertBefore(vid, canvas);
 vid.oncanplay = function() {
  rec.onclick = clickHandler;
  rec.disabled = false;
  canvas.width = vid.videoWidth;
  canvas.height = vid.videoHeight;
  vid.oncanplay = null;
  ctx.drawImage(vid, 0, 0);
 }

 vid.onseeked = function() {
  ctx.drawImage(vid, 0, 0);
      /*
      Here I want to include additional drawing per each frame,
      for sure taking more than 180ms
      */
  if(cStream && cStream.requestFrame) cStream.requestFrame();
  draw();
 }

 vid.crossOrigin = 'anonymous';
 vid.src = 'https://dl.dropboxusercontent.com/s/bch2j17v6ny4ako/movie720p.mp4';
 vid.currentTime = 0;
}

function draw() {
 if(go && cStream) {
  ++Q;
  // jump both the captured stream and the source video to the next frame time;
  // vid's onseeked handler then draws that frame and calls draw() again
  cStream.currentTime = Q / FPS;
  vid.currentTime = Q / FPS;
 }
}

loadVideo();
<button id="rec" disabled>record</button><br>
<canvas id="canvas" width="500" height="500"></canvas>

Is there a way to make this work?
The goal is to load a video, process every frame (which is time-consuming in my case) and return the processed video.

Footnote: I do not want to use ffmpeg.js, an external server or other technologies. I could process the video with classic ffmpeg without using JavaScript at all, but that is not the point of this question; it is more about MediaStream usability / maturity. The context here is Firefox/Chrome, but it could be Node.js or NW.js as well. If this is possible at all, or merely waiting on bug fixes, the next question would be feeding audio into it, but I think that would be better as a separate question.

  • Not really; it might become possible when implementations are finally less buggy and there is a better consensus on how things should work. One way would be to use the `MediaRecorder.pause()` and `MediaRecorder.resume()` methods (a rough sketch of that idea follows the comments), but I haven't managed to make that work in Chrome with a canvas stream yet. Also, one big problem you would face is controlling your original video frame by frame, since you would need to pause it too while processing your frames. Currently there is no stable, cross-browser way of doing that. – Kaiido Nov 24 '17 at 07:11
  • 1
  • PS: note that Chrome does implement `requestFrame` on the `CanvasCaptureMediaStreamTrack`, which is in accordance with the latest drafts of the specs, and that this browser doesn't have a `CanvasCaptureMediaStream` like FF. So you would have to do something like `if(stream.requestFrame){stream.requestFrame();}else{stream.getVideoTracks()[0].requestFrame();}` – Kaiido Nov 24 '17 at 07:14
  • @Kaiido I have also tried pause/resume, but it complains that the resource is no longer usable. I do not play the video; I request frames via currentTime and react to the onseeked event. Thank you, I didn't know that in Chrome it is implemented on the tracks instead. If you have no workaround in mind, maybe an answer saying that it is currently impossible would do. – Evil Nov 24 '17 at 07:20
  • 1
  • Yes, but even requesting frames via currentTime doesn't produce the same results in FF and Chrome: FF seems to always jump to the next keyframe, while Chrome seems able to reproduce the frame from the last one, so you will lose a lot of frames in FF with this technique. (For that browser play().then(rAF(pause)) works better, but in Chrome it doesn't work at all...) As for an answer, I'd rather not write one now; I may be missing something too, and I am still very confused about MediaRecorder.pause breaking in Chrome, so I need to investigate more. – Kaiido Nov 24 '17 at 07:53
