
I am using the MediaRecorder API to record videos in a web application. The application has an option to switch between the camera and the screen. I am using a canvas to augment the recorded stream: the logic captures a stream from the camera and directs it to a video element, that video is then rendered onto the canvas, and the stream from the canvas is passed to MediaRecorder. What I noticed is that switching from screen to camera (and vice versa) works fine as long as the user doesn't switch away from or minimize the Chrome window. The canvas rendering uses requestAnimationFrame, which freezes once the tab loses focus.

Is there any way to instruct Chrome not to pause the execution of requestAnimationFrame? Is there an alternative way to switch streams without interrupting the MediaRecorder recording?

Update: After reading through the documentation, tabs that play audio or hold an active WebSocket connection are not throttled. We are doing neither at the moment. This might serve as a workaround, but I am hoping for an alternative solution from the community. (setTimeout and setInterval are throttled too aggressively, so I am not using them; the throttling also degrades rendering quality.)

Update 2: I was able to fix this problem using a Web Worker. Instead of driving requestAnimationFrame from the main UI thread, the worker drives the loop and notifies the main thread via postMessage. When the UI thread finishes rendering, it sends a message back to the worker. A delta-time calculation throttles the worker so it doesn't overwhelm the main thread with messages.
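My exact code isn't included above, but the approach can be sketched roughly like this (all names here are mine, and this variant uses setInterval inside the worker rather than requestAnimationFrame, since worker timers are not throttled the way main-thread timers are when the tab is hidden):

```javascript
// Decide whether enough time has elapsed since the last frame.
// Kept as a pure helper so the throttling logic is easy to test.
function shouldRender( last_time, now, min_interval ) {
  return ( now - last_time ) >= min_interval;
}

// Browser-only part: spawn a worker that ticks ~60 times per second
// and ping-pongs with the main thread via postMessage.
if ( typeof Worker !== 'undefined' && typeof document !== 'undefined' ) {
  const worker_src = `
    let busy = false;
    setInterval( () => {
      if ( !busy ) { busy = true; postMessage( 'tick' ); }
    }, 1000 / 60 );
    onmessage = () => { busy = false; }; // main thread finished rendering
  `;
  const worker = new Worker(
    URL.createObjectURL( new Blob( [ worker_src ], { type: 'text/javascript' } ) )
  );

  let last_time = 0;
  worker.onmessage = () => {
    const now = performance.now();
    if ( shouldRender( last_time, now, 1000 / 60 ) ) {
      last_time = now;
      // drawVideoToCanvas(); // hypothetical: your canvas rendering step
    }
    worker.postMessage( 'done' ); // let the worker schedule the next tick
  };
}
```

The busy flag plus the delta check is what keeps the worker from flooding the main thread with messages faster than it can render.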

CuriousMind
  • May you include a portion of your code? – clota974 Jan 21 '20 at 19:34
  • Hi @clota974, thanks for your comment. The code/logic to invoke requestAnimationFrame is pretty much the same as the examples available online. My problem is that I need a workaround/solution so that requestAnimationFrame is not throttled when the Chrome window loses focus or is minimized. – CuriousMind Jan 22 '20 at 05:55
  • https://stackoverflow.com/questions/40687010/canvascapturemediastream-mediarecorder-frame-synchronization/40691112#40691112 Though I'm not sure the accepted hack there still works... (finally not closing as dupe because there might indeed be ways to switch streams without relying on a canvas) – Kaiido Jan 22 '20 at 07:05
  • Hi @Kaiido, thanks for the link. But as mentioned in your answer, does it stop capturing the stream from the canvas to the MediaStream? – CuriousMind Jan 22 '20 at 07:25
  • Normally not. But in reality, browsers may not paint on the canvas... so that's not really reliable. However, I may have a solution... for Chrome. – Kaiido Jan 22 '20 at 07:47
  • This is a browser bug/flaw/design choice that we're stuck with, unfortunately, for the moment. There are a handful of bug reports with minor activity over the last couple of years, but I suspect they won't get traction due to ever-increasing throttling. I hope I'm wrong and just pessimistic. :-) – Brad Jan 25 '20 at 04:22
  • Brad, thanks for your time. I was frustrated with this behavior, especially since, in my case, we require this functionality. I was able to get a workaround using a Web Worker (it is working in Chrome; I haven't yet tested it in Firefox). – CuriousMind Jan 25 '20 at 06:02

2 Answers


There is an ongoing proposal to add a .replaceTrack() method to the MediaRecorder API, but for the time being, the specs still read:

If at any point, a track is added to or removed from stream’s track set, the UA MUST immediately stop gathering data, discard any data that it has gathered [...]

And that's what is implemented.


So we still have to rely on hacks to do this ourselves...

The best one is probably to create a local RTC connection and record the receiving end.

// creates a mixable stream
async function mixableStream( initial_track ) {

  const source_stream = new MediaStream( [] );
  const pc1 = new RTCPeerConnection();
  const pc2 = new RTCPeerConnection();
  pc1.onicecandidate = (evt) => pc2.addIceCandidate( evt.candidate );
  pc2.onicecandidate = (evt) => pc1.addIceCandidate( evt.candidate );

  const wait_for_stream = waitForEvent( pc2, 'track' )
    .then( (evt) => new MediaStream( [ evt.track ] ) );

  pc1.addTrack( initial_track, source_stream );

  await waitForEvent( pc1, 'negotiationneeded' );
  try {
    await pc1.setLocalDescription( await pc1.createOffer() );
    await pc2.setRemoteDescription( pc1.localDescription );
    await pc2.setLocalDescription( await pc2.createAnswer() );
    await pc1.setRemoteDescription( pc2.localDescription );
  } catch ( err ) {
    console.error( err );
  }

  return {
    stream: await wait_for_stream,
    async replaceTrack( new_track ) {
      const sender = pc1.getSenders().find( ( { track } ) => track.kind === new_track.kind );
      return sender && sender.replaceTrack( new_track ) ||
        Promise.reject( "no such track" );
    }
  };
}


{ // remap unstable FF version
  const proto = HTMLMediaElement.prototype;
  if( !proto.captureStream ) { proto.captureStream = proto.mozCaptureStream; }
}

waitForEvent( document.getElementById( 'starter' ), 'click' )
  .then( (evt) => evt.target.parentNode.remove() )
  .then( (async() => {

  const urls = [
    "2/22/Volcano_Lava_Sample.webm",
    "/a/a4/BBH_gravitational_lensing_of_gw150914.webm"
  ].map( (suffix) => "https://upload.wikimedia.org/wikipedia/commons/" + suffix );
  
  const switcher_btn = document.getElementById( 'switcher' );
  const stop_btn =     document.getElementById( 'stopper' );
  const video_out =    document.getElementById( 'out' );
  
  let current = 0;
  
  // see below for 'getVideoTracks'
  const video_tracks = await Promise.all( urls.map( (url) => getVideoTracks( url ) ) );
  
  const mixable_stream = await mixableStream( video_tracks[ current ].track );

  switcher_btn.onclick = async (evt) => {

    current = +!current;
    await mixable_stream.replaceTrack( video_tracks[ current ].track );
    
  };

  // final recording part below

  // only for demo, so we can see what happens now
  video_out.srcObject = mixable_stream.stream;

  const rec = new MediaRecorder( mixable_stream.stream );
  const chunks = [];

  rec.ondataavailable = (evt) => chunks.push( evt.data );
  rec.onerror = console.log;
  rec.onstop = (evt) => {

    const final_file = new Blob( chunks );
    video_tracks.forEach( (vid_track) => vid_track.stop() );
    // only for demo, since we did set its srcObject
    video_out.srcObject = null;
    video_out.src = URL.createObjectURL( final_file );
    switcher_btn.remove();
    stop_btn.remove();

    const anchor = document.createElement( 'a' );
    anchor.download = 'file.webm';
    anchor.textContent = 'download';
    anchor.href = video_out.src;
    document.body.prepend( anchor );

  };

  stop_btn.onclick = (evt) => rec.stop();

  rec.start();
      
}))
.catch( console.error )

// some helpers below



// returns a video loaded to given url
function makeVid( url ) {

  const vid = document.createElement('video');
  vid.crossOrigin = "anonymous";
  vid.loop = true;
  vid.muted = true;
  vid.src = url;
  return vid.play()
    .then( (_) => vid );
  
}

/* Captures the video track of a given url
** @method stop() ::pauses the linked <video>
** @property track ::the video track
*/
async function getVideoTracks( url ) {
  const player = await makeVid( url );
  const track = player.captureStream().getVideoTracks()[ 0 ];
  
  return {
    track,
    stop() { player.pause(); }
  };
}

// Promisifies EventTarget.addEventListener
function waitForEvent( target, type ) {
  return new Promise( (res) => target.addEventListener( type, res, { once: true } ) );
}
video { max-height: 100vh; max-width: 100vw; vertical-align: top; }
.overlay {
  background: #ded;
  position: fixed;
  z-index: 999;
  height: 100vh;
  width: 100vw;
  top: 0;
  left: 0;
  display: flex;
  align-items: center;
  justify-content: center;
}
<div class="overlay">
  <button id="starter">start demo</button>
</div>
<button id="switcher">switch source</button>
<button id="stopper">stop recording</button> 
<video id="out" muted controls autoplay></video>

Otherwise you can still go the canvas way, with the Web Audio timer I made for when the page is blurred, even though this will not work in Firefox, since Firefox internally hooks into rAF to push new frames into the recorder...
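The Web Audio timer itself isn't reproduced here, but the general trick can be sketched like this (my assumption of the approach, not the author's exact code): a ScriptProcessorNode's onaudioprocess callback keeps firing at a steady rate even when the page is blurred, so it can replace rAF as a render clock. ScriptProcessorNode is deprecated in favor of AudioWorklet, but it still works and was the common approach at the time:

```javascript
// Audio callbacks arrive once every bufferSize frames; this pure
// helper computes how many ticks per second that yields.
function ticksPerSecond( sampleRate, bufferSize ) {
  return sampleRate / bufferSize;
}

// Browser-only sketch: use the audio callback as a render clock.
if ( typeof AudioContext !== 'undefined' ) {
  const ctx = new AudioContext();
  // 1024-frame buffers at 44.1 kHz give ~43 callbacks per second
  const clock = ctx.createScriptProcessor( 1024, 1, 1 );
  clock.onaudioprocess = () => {
    // drawVideoToCanvas(); // hypothetical: your canvas rendering step
  };
  clock.connect( ctx.destination ); // must be connected to keep firing
}
```

Because the audio pipeline has to run without gaps to avoid audible glitches, the browser keeps delivering these callbacks even when the tab is in the background.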

Kaiido
  • I have tried switching tracks using the addXXX and removeXXX methods, but with that in place, MediaRecorder stops recording the stream. – CuriousMind Jan 22 '20 at 09:21
  • @CuriousMind got some time to make a hack, which works only on Firefox, for reasons I don't know (no more time to dig...) – Kaiido Jan 24 '20 at 10:20
  • I truly appreciate your efforts. Let me tell you a secret too: I got something working by delegating requestAnimationFrame to a worker and then processing the worker's messages in the main thread. It is working with Chrome. I found this article very useful (https://threejsfundamentals.org/threejs/lessons/threejs-offscreencanvas.html). I couldn't make use of OffscreenCanvas, but I liked the web worker processing parts. – CuriousMind Jan 25 '20 at 05:59
  • OffscreenCanvas won't help much; MediaRecorder is not available in Workers. And beware: while the WebWorker timer may be working now, nothing actually forces the browser to keep the main thread active, or to wake it to handle the message. It could very well just store the messages and handle them at wake-up. This is, I guess, what will happen in the near future when the [Page lifecycle API](https://wicg.github.io/page-lifecycle/) arrives. Whereas the browser has to handle events from the Audio API seamlessly, to avoid ugly clicks and the like. – Kaiido Jan 26 '20 at 00:08
  • Thanks @Kaiido, I am going through the document. What you say is right: MediaRecorder doesn't work in a Web Worker, and OffscreenCanvas is not working for me (since I allow switching streams between camera and screen, the video element cannot be transferred to the worker as a Transferable object). While I am able to use workers at this point and the message exchange is happening, it may fail. – CuriousMind Jan 28 '20 at 06:01
  • But I also feel there has to be a way for developers to request permission from the user to continue background processing (similar to permissions requested on mobile). That should solve the problem. – CuriousMind Jan 28 '20 at 06:04

I had the same problem and was trying to figure it out without too much complexity, such as a canvas or SourceBuffer.

I used an RTCPeerConnection on the same page to make a loopback connection. Once the connection is made, you get an RTCRtpSender via peerConnection.addTrack(), and from there you can easily switch tracks.
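In code, the switching part boils down to something like this (a sketch, assuming pc is the sending side of the loopback connection; the function name is mine):

```javascript
// Find the sender whose current track matches the kind ('video' or
// 'audio') of the new track, then swap tracks in place. replaceTrack
// needs no renegotiation, so a MediaRecorder attached to the
// receiving end keeps recording uninterrupted.
async function switchTrack( pc, new_track ) {
  const sender = pc.getSenders()
    .find( (s) => s.track && s.track.kind === new_track.kind );
  if ( !sender ) throw new Error( 'no sender for kind ' + new_track.kind );
  await sender.replaceTrack( new_track );
}
```

This is the same primitive the accepted answer's replaceTrack helper wraps; the linked library packages it up with the loopback-connection setup.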

I just made a library and a demo that you can find: https://github.com/meething/StreamSwitcher/

QVDev
  • I would like to try this. What if we don't have a video object on the screen? I am using video-conferencing code to do all that, but have a separate local media stream to save HQ quality. The code looks like it uses the srcObject on the video element. Can I update the MediaRecorder object? – John Jan 29 '21 at 18:32
    Hi @John, the question is not really clear; could you elaborate to make it clearer what you are trying to achieve? – QVDev May 20 '21 at 16:32
  • I made use of @QVDev's solution in a React project after struggling with the same issue. Here's a codesandbox if anyone wants to use the code: https://codesandbox.io/s/mediarecorderexample-msxm1 – Tosh Velaga Nov 08 '21 at 21:29