
I am using the face-api.js library: https://github.com/justadudewhohacks/face-api.js

I am trying to get the position of my face inside the video.

I would like to build an application that stores my face's initial position and then reports how much my face has moved.

For example, say my video is 600px wide and 400px tall. I want to get my eye positions, e.g. my left eye is 200px from the right and 300px from the bottom. That is my left eye's initial position. After that position is set, if I move, the app should show an alert or a popup window.

Nao
  • Are you asking about the Azure Face API? If you are, please read the docs; it's all there: https://azure.microsoft.com/en-au/services/cognitive-services/face/ – yeya Nov 05 '19 at 13:57
  • Oh, I was asking about this one: https://github.com/justadudewhohacks/face-api.js Thank you though. – Nao Nov 05 '19 at 21:46

1 Answer


First of all, create the video element and its stream, and load all the models. Make sure you load all the models inside Promise.all().
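For reference, the model loading mentioned above might look like this. This is a sketch, not from the original answer; the '/models' path is an assumption, so point loadFromUri at wherever you serve the weight files from:

```javascript
// Load every model the detection loop below needs, in parallel, then start the
// webcam. The '/models' path is an assumption -- use your own weights folder.
async function setup(video) {
  await Promise.all([
    faceapi.nets.tinyFaceDetector.loadFromUri('/models'),
    faceapi.nets.faceLandmark68Net.loadFromUri('/models'),
    faceapi.nets.faceExpressionNet.loadFromUri('/models'),
  ]);
  // Attach the webcam stream to the video element.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  video.srcObject = stream;
}
```

You would call `setup(video)` once on page load, before the 'play' listener fires.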

You can set up both face detection and face landmark drawing like this:

video.addEventListener('play', () => {
    // Create a canvas from our video element
    const canvas = faceapi.createCanvasFromMedia(video);
    document.body.append(canvas);
    // Current display size of our video
    const displaySize = { width: video.width, height: video.height };
    faceapi.matchDimensions(canvas, displaySize);
    // Run the detection repeatedly with setInterval; the callback is async
    // because the library's detection calls return promises
    setInterval(async () => {
        // Every 100 ms, detect all faces in the webcam video element
        const detections = await faceapi
            .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
            .withFaceLandmarks()
            .withFaceExpressions();
        // Resize the boxes so they match the display size of the video element
        const resizedDetections = faceapi.resizeResults(detections, displaySize);
        // Get the 2D context and clear the whole canvas
        canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height);
        faceapi.draw.drawDetections(canvas, resizedDetections);
        faceapi.draw.drawFaceLandmarks(canvas, resizedDetections);
        faceapi.draw.drawFaceExpressions(canvas, resizedDetections);
    }, 100);
});

Then you can retrieve the Face Landmark points and contours.

This is for all Face Landmark positions:

const landmarkPositions = landmarks.positions

This is for the positions of individual features:

// only available for 68 point face landmarks (FaceLandmarks68)
const jawOutline = landmarks.getJawOutline();
const nose = landmarks.getNose();
const mouth = landmarks.getMouth();
const leftEye = landmarks.getLeftEye();
const rightEye = landmarks.getRightEye();
const leftEyeBrow = landmarks.getLeftEyeBrow();
const rightEyeBrow = landmarks.getRightEyeBrow();

For the position of the left eye, you can create an async function inside video.addEventListener and get the initial position of your left eye:

video.addEventListener('play', () => {
    ...
    async function leftEyePosition() {
         // Detect one face and its landmarks; the result is undefined
         // when no face is found, so guard against that
         const result = await faceapi
             .detectSingleFace(video, new faceapi.TinyFaceDetectorOptions())
             .withFaceLandmarks();
         if (!result) return;
         const leftEye = result.landmarks.getLeftEye();
         console.log("Left eye position ===> " + JSON.stringify(leftEye));
    }
});
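To get the alert the question asks for, one possible approach (my own sketch, not from the answer above; `watchLeftEye` and `MOVE_THRESHOLD_PX` are made-up names, and the threshold is an arbitrary tolerance you would tune) is to remember the first detected left-eye centre and compare every later detection against it:

```javascript
// Plain Euclidean distance between two points. Helper, not a face-api.js API.
function distance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Average of an array of {x, y} points, e.g. the 6 left-eye landmark points.
function center(points) {
  const n = points.length;
  return {
    x: points.reduce((s, p) => s + p.x, 0) / n,
    y: points.reduce((s, p) => s + p.y, 0) / n,
  };
}

// How far (in pixels) the eye may drift before we complain -- tune to taste.
const MOVE_THRESHOLD_PX = 30;
let firstPosition = null;

// Sketch of one tick of the watch loop; assumes video, the models, and
// faceapi are set up as in the answer above. Call it from setInterval.
async function watchLeftEye(video) {
  const result = await faceapi
    .detectSingleFace(video, new faceapi.TinyFaceDetectorOptions())
    .withFaceLandmarks();
  if (!result) return; // no face in this frame
  const eye = center(result.landmarks.getLeftEye());
  if (!firstPosition) {
    firstPosition = eye; // remember the initial position
  } else if (distance(firstPosition, eye) > MOVE_THRESHOLD_PX) {
    alert('You moved!'); // or show a popup of your own
  }
}
```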
Ömürcan Cengiz
  • Thank you so much!!!!!!! I did like this! `const detections = await faceapi .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions()) .withFaceLandmarks();` – Nao Nov 24 '19 at 01:07
  • It is okay to not load FaceExpressions :) Glad to help. – Ömürcan Cengiz Nov 24 '19 at 17:40
  • We usually mark the answer as correct if it helped you :) – Ömürcan Cengiz Nov 29 '19 at 22:19
  • I get 5 points here like: 0: od {_x: 53101.80255080483, _y: 38695.32022788042} even though my video/canvas is only 720x560px - what do those points mean exactly? How can I use them to position a div (DOM, not canvas) at the eye's position? – Suisse Jul 07 '20 at 20:53