
I'm an IT student and would like to understand more about the Augmented Faces API in ARCore.

I just saw the ARCore v1.7 release and the new Augmented Faces API. I get the enormous potential of this API, but I haven't seen any questions or articles on the subject. So I've been asking myself about it, and here are some assumptions and questions that come to mind about this release.

Assumption

  • The ARCore team is using machine learning (like Instagram and Snapchat) to generate landmarks all over the face, probably HOG face detection.

Questions

  • How does ARCore generate 468 points all over the user's face on a smartphone? I couldn't find any answer to that, even in the source code.
  • How can they get depth from a single smartphone camera?
  • How could the face detection / tracking be adapted to a custom object or another part of the body, like a hand?

So if you have any advice or remarks on this subject, let's share!

Zenocode

2 Answers


ARCore's new Augmented Faces API, which works with the front-facing camera and no depth sensor, offers a high-quality, 468-point 3D canonical mesh that lets users attach effects to their faces, such as animated masks, glasses, skin retouching, etc. The mesh provides coordinates and region-specific anchors that make it possible to add these effects.

I believe the facial landmark detection is generated with the help of computer vision algorithms under the hood of ARCore 1.7. It's also important to say that you can get started in Unity or in Sceneform by creating an ARCore session with the "front-facing camera" and the Augmented Faces "mesh" mode enabled. Note that other AR features, such as plane detection, aren't currently available when using the front-facing camera. AugmentedFace extends Trackable, so faces are detected and updated just like planes, Augmented Images, and other Trackables.
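For illustration, here's a minimal per-frame sketch of my own (not from the official sample), assuming `session` is a session configured as shown in the code further below:

import com.google.ar.core.AugmentedFace;
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;
import com.google.ar.core.TrackingState;
import java.nio.FloatBuffer;

// Inside your per-frame update callback
Frame frame = session.update();

// getUpdatedTrackables() returns only the faces that changed in this frame
for (AugmentedFace face : frame.getUpdatedTrackables(AugmentedFace.class)) {
    if (face.getTrackingState() == TrackingState.TRACKING) {
        FloatBuffer vertices = face.getMeshVertices(); // the 468 mesh vertices
        Pose centerPose = face.getCenterPose();        // face pose in world space
    }
}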


As you know, several years ago Google released the Face API, which performs face detection: it locates faces in pictures, along with their position (where they are in the picture) and orientation (which way they're facing, relative to the camera). The Face API allows you to detect landmarks (points of interest on a face) and perform classifications to determine whether the eyes are open or closed and whether or not a face is smiling. It also detects and follows faces in moving images, which is known as face tracking.
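For reference, here's a minimal sketch of that (now deprecated) Mobile Vision Face API; `context` and `bitmap` are assumed to exist:

import android.util.SparseArray;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;

// Build a detector that reports landmarks and classifications
FaceDetector detector = new FaceDetector.Builder(context)
        .setTrackingEnabled(true)
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)
        .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
        .build();

// Wrap a still image and run detection
Frame frame = new Frame.Builder().setBitmap(bitmap).build();
SparseArray<Face> faces = detector.detect(frame);

for (int i = 0; i < faces.size(); i++) {
    Face face = faces.valueAt(i);
    float smiling = face.getIsSmilingProbability(); // classification example
}
detector.release();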

So, ARCore 1.7 borrowed some architectural elements from the Face API, and now it not only detects facial landmarks and generates 468 points for them, but also tracks them in real time at 60 fps and sticks 3D facial geometry to them.

See Google's Face Detection Concepts Overview.


Calculating a depth channel in video shot by a moving RGB camera is not rocket science. You just need to apply a parallax formula to tracked features: if the translation amplitude of a feature on a static object is high, the tracked object is close to the camera, and if the amplitude is low, the tracked object is farther from the camera. These approaches to calculating a depth channel have been standard in compositing apps such as The Foundry NUKE and Blackmagic Fusion for more than 10 years. Now the same principles are accessible in ARCore.
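As a toy illustration (my own numbers, not ARCore's actual implementation), the classic parallax relation depth = focalLength x baseline / disparity already captures that inverse relationship:

// Toy depth-from-parallax estimate; all values are hypothetical
// focalLengthPx – focal length in pixels
// baselineM     – how far the camera moved between two frames, in meters
// disparityPx   – how far the tracked feature shifted on screen, in pixels
static float depthFromParallax(float focalLengthPx, float baselineM, float disparityPx) {
    // Large disparity -> the static object is close; small disparity -> it's far
    return focalLengthPx * baselineM / disparityPx;
}

// A feature shifting 40 px is closer than one shifting 4 px:
// depthFromParallax(1000f, 0.05f, 40f) ≈ 1.25 m
// depthFromParallax(1000f, 0.05f,  4f) ≈ 12.5 m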

You cannot repurpose the face detection / tracking algorithm for a custom object or another part of the body, like a hand. The Augmented Faces API is developed for faces only.

Here's what the Java code for activating the Augmented Faces feature looks like:

import android.app.Activity;
import com.google.ar.core.Config;
import com.google.ar.core.Session;
import com.google.ar.core.exceptions.UnavailableException;
import java.util.EnumSet;

// Create an ARCore session that supports Augmented Faces
public Session createAugmentedFacesSession(Activity activity) throws UnavailableException {

    // Use the selfie (front-facing) camera
    Session session = new Session(activity, EnumSet.of(Session.Feature.FRONT_CAMERA));

    // Enable the Augmented Faces 3D mesh mode
    Config config = session.getConfig();
    config.setAugmentedFaceMode(Config.AugmentedFaceMode.MESH3D);
    session.configure(config);
    return session;
}

Then get a list of detected faces:

Collection<AugmentedFace> fl = session.getAllTrackables(AugmentedFace.class);

And finally, render the effect:

for (AugmentedFace face : fl) {

    // Skip faces that aren't currently being tracked
    if (face.getTrackingState() != TrackingState.TRACKING) {
        continue;
    }

    // Create a face node and add it to the scene
    AugmentedFaceNode faceNode = new AugmentedFaceNode(face);
    faceNode.setParent(scene);

    // Overlay the 3D assets on the face
    faceNode.setFaceRegionsRenderable(faceRegionsRenderable);

    // Overlay a texture on the face
    faceNode.setFaceMeshTexture(faceMeshTexture);
}
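If you need the region-specific anchors mentioned above rather than the whole mesh, ARCore also exposes poses for a few predefined face regions. A small sketch, reusing the `face` variable from the loop above:

// Poses of predefined face regions in world space
Pose noseTip = face.getRegionPose(AugmentedFace.RegionType.NOSE_TIP);
Pose foreheadLeft = face.getRegionPose(AugmentedFace.RegionType.FOREHEAD_LEFT);
Pose foreheadRight = face.getRegionPose(AugmentedFace.RegionType.FOREHEAD_RIGHT);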
Andy Jazz
  • Thank you very much @ARGeo, I totally understand the face detection now! I was looking at `Trackable` on `AugmentedFaces`, which is why I thought it was possible to adapt it to all kinds of custom objects / body parts. Could I create my own hand landmarks detector like the face detection, do the same as the `AugmentedFaces` class (extend `Trackable`), and then be able to generate a mesh to apply a filter on it? – Zenocode Feb 26 '19 at 09:24
  • I suppose Google will give ARCore developers full access to these unique tools and classes in some time. Let's wait for it. At the moment even the documentation isn't ready. ))) – Andy Jazz Feb 26 '19 at 09:33
  • Does Augmented Faces support iOS? – Maulik Kundaliya Jun 21 '19 at 10:20
  • I don't know. I haven't tried Augmented Faces for iOS. – Andy Jazz Jun 21 '19 at 11:50
  • @MaulikKundaliya Apple has its own face tracking for ARKit; take a look at https://developer.apple.com/documentation/arkit/tracking_and_visualizing_faces – Zenocode Jul 03 '19 at 11:35
  • Can you tell me how I can set the depth of an object so it shows correctly on a face? – Sonam Gupta Jan 06 '20 at 12:09
  • @SonamGupta Please, formulate your question more specifically. – Andy Jazz Jan 06 '20 at 12:22
  • When we add a 3D object like spectacle frames to a face, it does not show at the correct depth. Please look at the screenshot: https://drive.google.com/file/d/1NBAu64LpRLgfJ7M7UU8DDh0iuIzwUhKB/view?usp=sharing – Sonam Gupta Jan 07 '20 at 06:10
  • Please create a new question, and I'll answer it. – Andy Jazz Jan 07 '20 at 06:21
  • As you said, I created a new question: https://stackoverflow.com/questions/59640401/object-does-not-show-in-correct-depth-in-face-augumentation-in-ar-core Please help me with this. – Sonam Gupta Jan 08 '20 at 06:23

ARCore doesn't have a general face detection feature, so it might be better to use ML Kit instead:

https://developers.google.com/ml-kit/vision/face-detection?hl=en
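A minimal ML Kit sketch, assuming a `bitmap` and its `rotationDegrees` are available:

import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.face.Face;
import com.google.mlkit.vision.face.FaceDetection;
import com.google.mlkit.vision.face.FaceDetector;
import com.google.mlkit.vision.face.FaceDetectorOptions;
import com.google.mlkit.vision.face.FaceLandmark;

// Configure the detector to return landmarks
FaceDetectorOptions options = new FaceDetectorOptions.Builder()
        .setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_ALL)
        .build();
FaceDetector detector = FaceDetection.getClient(options);

// Wrap the bitmap and process it asynchronously
InputImage image = InputImage.fromBitmap(bitmap, rotationDegrees);
detector.process(image)
        .addOnSuccessListener(faces -> {
            for (Face face : faces) {
                // May be null if the landmark wasn't found
                FaceLandmark nose = face.getLandmark(FaceLandmark.NOSE_BASE);
            }
        });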

chloe