I'm trying to create a 2D face-unwrap image from the live camera feed of an ARKit session.
I see there is another question about mapping an image onto a mesh. My question is different: it is about generating an image (or many smaller images) from the mesh and the camera feed.
- The session detects the user's face and adds an ARFaceAnchor to the renderer.
- The anchor has a geometry object, defining a mesh.
- The mesh has a large number of vertices.
- For each update, there is a corresponding camera image.
- The camera image has a pixel buffer with image data.
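For context, the session is configured for face tracking along these lines (a minimal sketch; sceneView is the ARSCNView referenced in my code below):

import ARKit

// Minimal face-tracking setup (sketch). Once this configuration runs and a
// face is detected, the renderer callback below starts receiving updates.
override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.delegate = self
    guard ARFaceTrackingConfiguration.isSupported else { return }
    sceneView.session.run(ARFaceTrackingConfiguration())
}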
How do I retrieve image data around the face-anchor vertices, to "stitch" together a face unwrap from the corresponding camera frames? Here is my attempt so far:
import ARKit
import SceneKit

var session: ARSession { self.sceneView.session }

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    // Only face anchors carry the face mesh.
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }
    let faceGeometry: ARFaceGeometry = faceAnchor.geometry

    // Each frame carries the camera image as a CVPixelBuffer.
    if let pixelBuffer: CVPixelBuffer = session.currentFrame?.capturedImage {
        print("how to get pixel buffer bytes around any given anchor?")
    }

    // Vertices are in the face anchor's local coordinate space.
    for vertex in faceGeometry.vertices {
        let position = SCNVector3(vertex.x, vertex.y, vertex.z)
        print("How to use this position to retrieve image data from pixel buffer around this position?")
    }
}
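My current thinking on the projection step: each vertex is in the anchor's local space, so it would first be transformed to world space via faceAnchor.transform and then projected through the frame's camera. A sketch; imagePoint(for:faceAnchor:frame:) is a name I made up, and I'm assuming that passing the captured image's pixel size as the viewport (with .landscapeRight, the captured image's native orientation) yields coordinates in captured-image pixels:

import ARKit
import UIKit

// Sketch: project one face-mesh vertex into captured-image pixel coordinates.
func imagePoint(for vertex: SIMD3<Float>,
                faceAnchor: ARFaceAnchor,
                frame: ARFrame) -> CGPoint {
    // Lift the vertex from face-anchor local space into world space.
    let world = faceAnchor.transform * SIMD4<Float>(vertex, 1)
    let worldPosition = SIMD3<Float>(world.x, world.y, world.z)

    // Use the image's own resolution as the viewport so the result is in
    // captured-image pixels rather than screen points (assumption).
    let imageSize = CGSize(width: CVPixelBufferGetWidth(frame.capturedImage),
                           height: CVPixelBufferGetHeight(frame.capturedImage))
    return frame.camera.projectPoint(worldPosition,
                                     orientation: .landscapeRight,
                                     viewportSize: imageSize)
}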
Included below is a sample of where the face-geometry vertices are positioned.
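As for actually reading image data around a projected point: capturedImage is bi-planar YCbCr rather than RGB, so instead of indexing raw bytes I would crop small patches through Core Image, which handles the color conversion. A sketch; patch(around:in:radius:) and ciContext are names I made up, and the radius is arbitrary:

import ARKit
import CoreImage

// Created once and reused; CIContext construction is expensive.
let ciContext = CIContext()

// Sketch: crop a small square patch of the captured image around a point
// returned by the projection step above.
func patch(around point: CGPoint,
           in pixelBuffer: CVPixelBuffer,
           radius: CGFloat = 8) -> CGImage? {
    let image = CIImage(cvPixelBuffer: pixelBuffer)
    // Core Image uses a bottom-left origin, while the projected point is in
    // top-left-origin pixel coordinates, so flip the y axis (assumption).
    let height = CGFloat(CVPixelBufferGetHeight(pixelBuffer))
    let rect = CGRect(x: point.x - radius,
                      y: (height - point.y) - radius,
                      width: radius * 2,
                      height: radius * 2)
    let cropRect = rect.intersection(image.extent)
    guard !cropRect.isEmpty else { return nil }
    let cropped = image.cropped(to: cropRect)
    return ciContext.createCGImage(cropped, from: cropped.extent)
}

Stitching these patches into an unwrap would then presumably use the mesh's textureCoordinates to place each patch, which is the part I'm unsure about.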