You didn't say what format you want your webcam-captured images to be delivered in. It's pretty easy to deliver them into a <canvas /> element.
- You use gUM to open up a video stream, then
- preview it in a <video /> element, then
- use drawImage to copy a snapshot of that element to your canvas.
Here's some example code, based on the "official" webrtc sample.
Initialize
const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
/* default canvas size; the click handler below resizes it to match the video feed */
canvas.width = 480;
canvas.height = 360;
const button = document.querySelector('button');
Snapshot button click handler
See the drawImage() method call; that's what grabs the snapshot of the video preview element.
button.onclick = function() {
/* set the canvas to the dimensions of the video feed */
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
/* make the snapshot */
canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);
};
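Once the frame is on the canvas, you can hand it off in whatever format you need. Here's a minimal sketch; the image/png type, file name, and upload endpoint are just illustrative, not part of the sample above.
const dataUrl = canvas.toDataURL('image/png'); /* data: URL, handy as an <img> src */
canvas.toBlob(blob => {
/* a Blob is handy for uploading; '/upload' is a hypothetical endpoint */
const form = new FormData();
form.append('snapshot', blob, 'snapshot.png');
fetch('/upload', { method: 'POST', body: form });
}, 'image/png');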
Start and preview the video stream
navigator.mediaDevices.getUserMedia({ audio: false, video: true })
  .then(stream => video.srcObject = stream)
  .catch(error => console.error(error));
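One caveat: the preview only shows up once the video element actually plays. The webrtc samples rely on the autoplay attribute on the <video> element; if yours doesn't have it, call play() yourself, roughly like this:
navigator.mediaDevices.getUserMedia({ audio: false, video: true })
  .then(stream => {
    video.srcObject = stream;
    return video.play(); /* not needed if the element has the autoplay attribute */
  })
  .catch(error => console.error(error));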
Obviously this is very simple. The parameter you pass to gUM is a MediaStreamConstraints object. It gives you a lot of control over the video (and audio) you want to capture.
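For example, a constraints object like this asks for (but doesn't insist on) 720p video from the user-facing camera; the exact width, height, and facingMode values are just illustrative:
navigator.mediaDevices.getUserMedia({
  audio: false,
  video: {
    width:  { ideal: 1280 },  /* prefer 720p, but accept what the camera offers */
    height: { ideal: 720 },
    facingMode: 'user'        /* prefer the front-facing camera */
  }
})
.then(stream => video.srcObject = stream)
.catch(error => console.error(error));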