
I am running a face tracking configuration in ARKit with SceneKit. In each frame I can access the camera feed via the `snapshot` property or via `capturedImage` as a pixel buffer. I have also been able to map each face vertex to the image coordinate space and add some UIView helpers (1-point squares) that display all the face vertices on screen in real time, like this:

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard node.geometry is ARSCNFaceGeometry,
        let anchorFace = anchor as? ARFaceAnchor,
        anchorFace.isTracked
        else { return }

    let vertices = anchorFace.geometry.vertices

    for (index, vertex) in vertices.enumerated() {
        // Convert the vertex to world space, then project it into screen coordinates.
        let projected = sceneView.projectPoint(node.convertPosition(SCNVector3(vertex), to: nil))
        let xVertex = CGFloat(projected.x)
        let yVertex = CGFloat(projected.y)

        let newPosition = CGPoint(x: xVertex, y: yVertex)
        // Here I update the position of the on-screen UIView at `index` with the newly
        // projected vertex position; I keep an array of views whose count matches the
        // vertex count, which is consistent across sessions.
    }
}
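
The per-vertex view update mentioned in that comment is roughly the following (a minimal sketch of my own helpers inside the view controller; `vertexDotViews`, `makeVertexDotViews` and `moveVertexDot` are just names I made up):

// Hypothetical 1-point marker views, one per face vertex, kept as a property of the
// view controller. ARFaceGeometry's vertex count is constant, so the array is built
// once and reused every frame.
var vertexDotViews: [UIView] = []

func makeVertexDotViews(count: Int, in parent: UIView) {
    vertexDotViews = (0..<count).map { _ in
        let dot = UIView(frame: CGRect(x: 0, y: 0, width: 1, height: 1))
        dot.backgroundColor = .green
        parent.addSubview(dot)
        return dot
    }
}

func moveVertexDot(at index: Int, to position: CGPoint) {
    // The SceneKit renderer callback runs off the main thread; UIKit must only be
    // touched on the main thread.
    DispatchQueue.main.async {
        self.vertexDotViews[index].center = position
    }
}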

Since the UV coordinates are also constant across sessions, I am trying to compute, for each pixel that lies over the face mesh, its corresponding position in the UV texture, so that after a few iterations I can write a person's face texture to a file.

I have come up with a theoretical solution: create a CGPath for each triangle, ask for each pixel whether it is contained in that triangle, and if it is, build a triangular image by cropping a rectangle from the camera frame and applying a triangle mask obtained from the triangle's vertices projected into image coordinates. Each triangular image then has to be warped to match the corresponding triangle in the UV layout (essentially skewing it into place), added as a UIImageView subview of a 1024x1024 UIView, one image per triangle, and finally that UIView encoded as a PNG. This sounds like a lot of work, especially the part of matching each cropped triangle with its corresponding triangle in the UV texture.
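
To illustrate the masking part of that idea, here is a minimal sketch of cropping one triangle out of a camera image with a UIBezierPath mask (the three corners are assumed to already be projected into the image's pixel coordinate space; `triangularPatch` is just my own helper name):

import UIKit

/// Crops a triangular patch out of `image`, given the triangle's three corners in the
/// image's pixel coordinate space. Everything outside the triangle stays transparent.
func triangularPatch(from image: UIImage, corners: [CGPoint]) -> UIImage? {
    guard corners.count == 3 else { return nil }

    // Build the triangle path and take its bounding box; that box is the patch we render.
    let path = UIBezierPath()
    path.move(to: corners[0])
    path.addLine(to: corners[1])
    path.addLine(to: corners[2])
    path.close()
    let box = path.bounds

    let renderer = UIGraphicsImageRenderer(size: box.size)
    return renderer.image { context in
        // Shift so the triangle's bounding box starts at the renderer's origin.
        context.cgContext.translateBy(x: -box.origin.x, y: -box.origin.y)
        // Clip to the triangle, then draw the full image; only the triangle survives.
        path.addClip()
        image.draw(at: .zero)
    }
}

Each patch would then still have to be warped from its camera-image triangle to the matching triangle in the 1024x1024 UV layout, which is exactly the part that sounds error-prone to do by hand.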

In the Apple demo project there is an image that shows what that UV texture looks like; if you edit this image and add some colors, they show up on the face. I need the other way around: from what the camera feed is seeing, create a texture of the face. The same demo project has an example that does exactly what I need, but with a shader and with no clue on how to extract the texture to a file. The shader code looks like this:

/*
 <samplecode>
 <abstract>
 SceneKit shader (geometry) modifier for texture mapping ARKit camera video onto the face.
 </abstract>
 </samplecode>
 */

#pragma arguments
float4x4 displayTransform // from ARFrame.displayTransform(for:viewportSize:)

#pragma body

// Transform the vertex to the camera coordinate system.
float4 vertexCamera = scn_node.modelViewTransform * _geometry.position;

// Camera projection and perspective divide to get normalized viewport coordinates (clip space).
float4 vertexClipSpace = scn_frame.projectionTransform * vertexCamera;
vertexClipSpace /= vertexClipSpace.w;

// XY in clip space is [-1,1]x[-1,1], so adjust to UV texture coordinates: [0,1]x[0,1].
// Image coordinates are Y-flipped (upper-left origin).
float4 vertexImageSpace = float4(vertexClipSpace.xy * 0.5 + 0.5, 0.0, 1.0);
vertexImageSpace.y = 1.0 - vertexImageSpace.y;

// Apply ARKit's display transform (device orientation * front-facing camera flip).
float4 transformedVertex = displayTransform * vertexImageSpace;

// Output as texture coordinates for use in later rendering stages.
_geometry.texcoords[0] = transformedVertex.xy;

/**
 * MARK: Post-process special effects
 */

Honestly I do not have much experience with shaders, so any help translating what this shader does into plain Cocoa Touch / Swift code would be appreciated. I am not thinking about performance yet, so doing it on the CPU, on a background thread or offline, is fine. In any case I will have to choose the right frames to avoid skewed samples (some triangles carry very good information while others end up with barely a few stretched pixels), for example by checking whether a triangle's normal points toward the camera before sampling it, or by adding UI helpers that make the user turn their face so the whole face gets sampled correctly.
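
For reference, this is my best attempt at reading the shader step by step and writing the same math as CPU-side Swift. It is a rough, untested sketch; the function name and parameters are my own, and it assumes access to the current ARFrame, the face anchor, the viewport size, and the interface orientation:

import ARKit
import simd

/// Maps one face-geometry vertex (in the face anchor's local space) to a normalized
/// [0,1] x [0,1] coordinate in the camera image, replicating the steps of the
/// geometry shader modifier on the CPU.
func cameraImageCoordinate(for vertex: simd_float3,
                           faceAnchor: ARFaceAnchor,
                           frame: ARFrame,
                           viewportSize: CGSize,
                           orientation: UIInterfaceOrientation) -> CGPoint {
    // Model -> world -> camera (the shader's scn_node.modelViewTransform).
    let modelMatrix = faceAnchor.transform
    let viewMatrix = frame.camera.viewMatrix(for: orientation)
    let vertexCamera = viewMatrix * modelMatrix * simd_float4(vertex.x, vertex.y, vertex.z, 1)

    // Camera projection and perspective divide (clip space, [-1,1] on each axis).
    let projectionMatrix = frame.camera.projectionMatrix(for: orientation,
                                                         viewportSize: viewportSize,
                                                         zNear: 0.001,
                                                         zFar: 1000)
    var clip = projectionMatrix * vertexCamera
    clip /= clip.w

    // Clip space [-1,1] -> normalized coordinates [0,1], with the Y axis flipped
    // because image coordinates have an upper-left origin.
    var uv = CGPoint(x: CGFloat(clip.x) * 0.5 + 0.5,
                     y: CGFloat(clip.y) * 0.5 + 0.5)
    uv.y = 1.0 - uv.y

    // Apply ARKit's display transform (device orientation + front-facing camera flip),
    // the same matrix the shader receives as its displayTransform uniform.
    let displayTransform = frame.displayTransform(for: orientation, viewportSize: viewportSize)
    return uv.applying(displayTransform)
}

Calling this for every vertex of `faceAnchor.geometry.vertices` should reproduce what the shader writes into `_geometry.texcoords[0]`. One thing I am unsure about: if you sample `frame.capturedImage` directly rather than the already display-transformed scene background, you may need the inverse of the display transform (`displayTransform.inverted()`) instead.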

I have already checked this post and this post but cannot get it to work.

This app does exactly what I need, but it does not appear to use ARKit.

Thanks.

Juan Boero
  • Can you let me know how to get `a persons face texture to a file.`?? please... – MJ Studio Feb 18 '20 at 23:09
  • Hello, did you find solution on this? – swiftlearneer Aug 02 '20 at 05:40
  • The example you shared doesn't actually get a full face texture, it just textures a mesh with the exact camera frame, by reprojecting the 3d face coordinates to 2D. You can try writing a shader that deforms the face and you'll see it break down, I've found limitations using this in a project because it's not a perfect 3D mesh. So, the solution might involve some more complicated algorithms to infer a full fidelity face texture. Commenting in case someone has any leads on that. – Dennis L Jan 28 '21 at 18:24
  • If you want to get the face texture into a file from the demo project, you're just talking about getting the camera texture, so you can just snapshot the ARSCNView to get the camera texture, but I don't think that's your actual goal. – Dennis L Jan 28 '21 at 18:26
  • Does this answer your question? [iOS11 ARKit: Can ARKit also capture the Texture of the user's face?](https://stackoverflow.com/questions/47225280/ios11-arkit-can-arkit-also-capture-the-texture-of-the-users-face) – Matt Bierner Apr 12 '21 at 07:05

0 Answers