
I'm interested in working with ARKit 3 and a couple of iPads to create a multi-user (collaborative) experience, as support for collaborative AR seems to have improved according to WWDC '19.

Apple talks a lot about face tracking and motion capture, but it sounds like this is only supported on the front-facing camera (facing the person holding the device). Is there no way to do face tracking of your friends who are sharing the experience? In the WWDC demo video, it looks like the motion capture character is being generated from a person in the user's view, and the Minecraft demo shows people in the user's view being mixed with Minecraft content in AR. This suggests that the back camera is handling this. Yet, I thought the point of AR was to attach virtual objects to the physical world in front of you. Reality Composer has an example with face tracking and a quote bubble that would follow the face around, but because I do not have a device with a depth camera, I do not know if the example is meant to have that quote bubble follow you, the user, around, or someone else in the camera's view.

In short, I'm a little confused about what sorts of things I can do with face tracking, people occlusion, and body tracking with respect to other people in a shared AR environment. Which cameras are in use, and which features can I apply to other people as opposed to just myself (selfie style)?

Lastly, assuming that I CAN do face and body tracking of other people in my view, and that I can do occlusion for other people, would someone direct me to some example code? I'd also like to use the depth information from the scene (again, if that's possible), but maybe this requires some completely different API.

Since I don't yet have a device with a TrueDepth camera, I can't really test this myself using the example project here: https://developer.apple.com/documentation/arkit/tracking_and_visualizing_faces. I am trying to determine, based on people's answers, whether I can create the system I want in the first place before purchasing the necessary hardware.

Andy Jazz
synchronizer

1 Answer


ARKit 3 provides the ability to use both front and back cameras at the same time.
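As a minimal sketch of what that looks like in code (assuming an existing `arView` hosting the session, and a device that supports the feature):

```swift
import ARKit

// Rear-camera world tracking with front-camera face tracking enabled
// in the same session — the ARKit 3 "simultaneous cameras" feature.
// Only supported on recent hardware, so check availability first.
let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsUserFaceTracking {
    configuration.userFaceTrackingEnabled = true
}
// arView is assumed to be your existing ARView / ARSCNView.
arView.session.run(configuration)
```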

Face tracking uses the front camera and requires a device with a TrueDepth camera. ARKit 3 can now track up to three faces with the front camera. Face tracking allows you to capture detailed facial movements.
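A front-camera face tracking session might be set up roughly like this (a sketch, assuming an existing `ARSession` passed in by the caller):

```swift
import ARKit

// Run front-camera (TrueDepth) face tracking for as many faces as the
// device supports — up to three under ARKit 3.
func startFaceTracking(in session: ARSession) {
    // Face tracking requires a TrueDepth camera; bail out otherwise.
    guard ARFaceTrackingConfiguration.isSupported else { return }
    let configuration = ARFaceTrackingConfiguration()
    configuration.maximumNumberOfTrackedFaces =
        ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces
    session.run(configuration)
}
```

Each tracked face is then delivered to your session delegate as an `ARFaceAnchor`, whose blend shapes describe the detailed facial movements.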

Body tracking and motion capture is performed with he rear camera. This allows a body to be detected and mapped onto a virtual skeleton that your app can use to capture position data.
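Sketched out, a body tracking session and the delegate callback that reads joint positions could look like this (the delegate method is assumed to live in your `ARSessionDelegate`):

```swift
import ARKit

// Rear-camera body tracking: ARKit detects a person and maps them onto
// a virtual skeleton exposed via ARBodyAnchor.
func startBodyTracking(in session: ARSession) {
    guard ARBodyTrackingConfiguration.isSupported else { return }
    session.run(ARBodyTrackingConfiguration())
}

// ARSessionDelegate callback: pull joint transforms off the skeleton.
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    for case let bodyAnchor as ARBodyAnchor in anchors {
        let skeleton = bodyAnchor.skeleton
        if let headTransform = skeleton.modelTransform(for: .head) {
            // Transform is relative to the body anchor's root joint;
            // the last column holds the joint's position.
            print(headTransform.columns.3)
        }
    }
}
```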

For example, you could capture the body motion of someone using the rear camera and the facial expression of the person watching that motion using the front camera, and combine both in a single ARKit session.

Paulw11
  • So Apple does not provide an API for facial expression detection of people in the scene. My understanding, then, is that I would need to use a different API or write my own computer vision algorithms to support this. (Thanks for helping with both my related questions. I needed to split them up since they were so different.) – synchronizer Aug 15 '19 at 20:50
  • The hardware isn't there on the rear camera. The TrueDepth camera projects thousands of infrared dots onto the user's face and uses that to derive a map of the face. – Paulw11 Aug 15 '19 at 20:53
  • Do you mean to say that the rear camera isn't good enough, period, for third-party CV algorithms to do successful facial expression tracking in their own ways? – synchronizer Aug 15 '19 at 20:54
  • You could certainly try and achieve the same through software for the rear camera but it will be less accurate than what can be achieved with a TrueDepth device and require more processing. – Paulw11 Aug 15 '19 at 20:56
  • Hi @Paulw11! I was just reading your suggestion here and I am wondering whether you would know how to get the TrueDepth camera parameters for face tracking? https://stackoverflow.com/questions/62927167/swift-get-the-truthdepth-camera-parameters-for-face-tracking-in-arkit – swiftlearneer Jul 20 '20 at 04:45