I'm trying to superimpose ARFaceAnchor vertices on-screen to accomplish two scenarios: 1) have a virtual face maintain its center (on-screen) position while reflecting changes in geometry.vertices, and 2) have the virtual face overlap the actual face (from the preview layer).

I've followed rickster's advice here, but have only succeeded in projecting the face on-screen from certain angles (it appears only in the lower left and rotates). I'm not too familiar with the different purposes of each matrix, but this is where I've gotten so far. Any advice?

let modelMatrix = faceAnchor.transform
var points: [CGPoint] = []

faceAnchor.geometry.vertices.forEach {
    // Convert the vertex position from model space to world space using the anchor's transform
    let vertex4 = vector_float4($0.x, $0.y, $0.z, 1)
    let vertexWorld = simd_mul(modelMatrix, vertex4)

    // Multiply that vector by the camera projection to get normalized image coordinates
    // (projectionMatrix is obtained elsewhere)
    let normalizedImageCoordinates = simd_mul(projectionMatrix, vertexWorld)

    let point = CGPoint(x: CGFloat(normalizedImageCoordinates.x), y: CGFloat(normalizedImageCoordinates.y))
    points.append(point)
}
  • I’m not clear on how your two scenarios can work together. If you’re keeping the virtual face in the center of the screen, how does it stay overlapping the “real” face from the video when the user moves? – rickster Feb 13 '18 at 18:54
  • Sorry rickster, the two scenarios wouldn't work together. They would just be two 'modes', if you will. – jjc Feb 13 '18 at 18:58

1 Answer

For those who are interested, here's the solution to (2). For (1), you can normalize the projected points to keep the face centered; see the sketch after the code below.

let faceAnchors = anchors.compactMap { $0 as? ARFaceAnchor }

guard !faceAnchors.isEmpty,
    let camera = session.currentFrame?.camera else { return }

let targetView = SomeUIView()   // placeholder: the view you're projecting into

// Calculate face points to project to screen

// A transform matrix appropriate for rendering 3D content to match the image captured by the camera
let projectionMatrix = camera.projectionMatrix(for: .portrait, viewportSize: targetView.bounds.size, zNear: 0.001, zFar: 1000)

// A transform matrix for converting from world space to camera space
let viewMatrix = camera.viewMatrix(for: .portrait)

let projectionViewMatrix = simd_mul(projectionMatrix, viewMatrix)
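
// Full pipeline for each vertex:
// model space --(modelMatrix)--> world space --(viewMatrix)--> camera space
//   --(projectionMatrix)--> clip space --(divide by w)--> normalized device coordinates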

for faceAnchor in faceAnchors {

    // The anchor's transform describes the face's current position and orientation in world
    // coordinates, i.e. in the space defined by the session configuration's worldAlignment.
    // Use it to position virtual content you want to "attach" to the face in your AR scene.
    let modelMatrix = faceAnchor.transform
    let mvpMatrix = simd_mul(projectionViewMatrix, modelMatrix)

    // Calculate points

    let points: [CGPoint] = faceAnchor.geometry.vertices.map { vertex -> CGPoint in

        let vertex4 = vector_float4(vertex.x, vertex.y, vertex.z, 1)

        // Project into clip space, then divide by w (the perspective divide)
        // to get normalized device coordinates in [-1, 1]
        let clipSpaceCoordinates = simd_mul(mvpMatrix, vertex4)
        let normalizedDeviceCoordinates = clipSpaceCoordinates / clipSpaceCoordinates.w

        return CGPoint(x: CGFloat(normalizedDeviceCoordinates.x),
                       y: CGFloat(normalizedDeviceCoordinates.y))
    }

}
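
The points above are normalized device coordinates in [-1, 1] with y pointing up, not UIKit view coordinates. To draw them, map them into the view's coordinate space and flip y (UIKit's origin is top-left). Here's a minimal sketch to use inside the loop above; the helper name viewPoint(fromNDC:in:) is my own, and it assumes targetView is the same view whose bounds were passed as viewportSize:

// Map a point from normalized device coordinates ([-1, 1], y up)
// to UIKit view coordinates (origin top-left, y down)
func viewPoint(fromNDC ndc: CGPoint, in viewSize: CGSize) -> CGPoint {
    return CGPoint(x: (ndc.x + 1) / 2 * viewSize.width,
                   y: (1 - (ndc.y + 1) / 2) * viewSize.height)
}

let viewPoints = points.map { viewPoint(fromNDC: $0, in: targetView.bounds.size) }

// For scenario (1): keep the face centered on-screen while the geometry animates,
// by shifting the projected points so their centroid sits at the view's center
let sum = viewPoints.reduce(CGPoint.zero) { CGPoint(x: $0.x + $1.x, y: $0.y + $1.y) }
let centroid = CGPoint(x: sum.x / CGFloat(viewPoints.count),
                       y: sum.y / CGFloat(viewPoints.count))
let centeredPoints = viewPoints.map {
    CGPoint(x: $0.x - centroid.x + targetView.bounds.midX,
            y: $0.y - centroid.y + targetView.bounds.midY)
}

This is what is meant above by normalizing the points for (1): the geometry still deforms with the user's expressions, but its on-screen centroid stays fixed.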