
Let's say, for example, that I want to anchor a cloud above the user's head.

I know that AnchorEntity exists and that you can get a reference to the user's head with AnchorEntity(.head), but how do I actually use it? With the following code I don't see anything at all.

import SwiftUI
import RealityKit

struct CloudSpace: View {
    
    let headAnchor = AnchorEntity(.head)
    
    var body: some View {
        RealityView { content in
            async let cloud = ModelEntity(named: "Cloud")
            do {
                content.add(headAnchor)
                let cloud = try await cloud
                headAnchor.addChild(cloud)
            } catch {
                print(error)
            }
        }
    }
}
Andy Jazz
swiftyboi

2 Answers


Anchoring a Model using Head anchor in visionOS

You need a real Vision Pro device, or the SwiftUI Canvas preview (Cmd+Opt+), to use the head anchor. The Xcode 15 visionOS Simulator does not let you see AnchoringComponent.Target.head in action, at least for now. To activate the head target (known as the camera target in iOS), try the following code. Keep in mind that a head anchor is automatically updated every frame by default; use the .once tracking mode to change that.

import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    var body: some View {
        VStack {
            RealityView { content in

                // Load the model from the app's RealityKitContent bundle
                if let cloud = try? await Entity(named: "Scene",
                                                 in: realityKitContentBundle) {

                    let anchor = AnchorEntity(.head)
                    // Anchor once instead of re-anchoring every frame
                    anchor.anchoring.trackingMode = .once
                    cloud.setParent(anchor)
                    content.add(anchor)

                    // Offset the cloud up and in front of the head
                    cloud.transform.translation.y = 0.25
                    cloud.transform.translation.z = -1.0
                    anchor.name = "Head Anchor"
                    print(content)
                }
            }
        }
    }
}


To test the anchoring process in the visionOS simulator, use RealityKit's plane anchor:

let anchor = AnchorEntity(.plane(.horizontal,
                           classification: .table,
                           minimumBounds: [0.12, 0.12]))
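
For reference, here is a sketch of how that plane anchor might slot into the same RealityView for simulator testing (the "Scene" entity name and the SimulatorContentView type are assumptions for illustration):

import SwiftUI
import RealityKit
import RealityKitContent

struct SimulatorContentView: View {
    var body: some View {
        RealityView { content in
            if let cloud = try? await Entity(named: "Scene",
                                             in: realityKitContentBundle) {
                // Attach to the first horizontal, table-classified surface
                // with at least a 12 cm x 12 cm extent
                let anchor = AnchorEntity(.plane(.horizontal,
                                           classification: .table,
                                           minimumBounds: [0.12, 0.12]))
                anchor.addChild(cloud)
                content.add(anchor)
            }
        }
    }
}

The simulator's canned rooms contain tables, so this anchor resolves there even though the head anchor does not.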
Andy Jazz
    Thanks. It's a shame we can't simulate it. I understand hands/fingers but the head is basically just the camera. Hopefully in Beta 3 – swiftyboi Jun 27 '23 at 20:28

If you just want to get the position of the head, to place content appropriately for testing, you can use WorldTrackingProvider.queryPose(atTimestamp:). It gives you the transform of the device at the specified TimeInterval.

It means setting up an ARKitSession with a WorldTrackingProvider. It's also an "expensive" operation according to the docs, but it will at least give you the current head location:

let pose = worldInfo.queryPose(atTimestamp: CACurrentMediaTime())

https://developer.apple.com/documentation/visionos/tracking-points-in-world-space/#Track-the-device-position-in-the-world
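
A minimal sketch of that session setup might look like the following. Note the hedges: released SDKs expose this query as queryDeviceAnchor(atTimestamp:) rather than the beta-era queryPose(atTimestamp:) shown above, and the HeadTracker type is an assumption for illustration:

import ARKit
import QuartzCore
import simd

@MainActor
final class HeadTracker {
    let session = ARKitSession()
    let worldInfo = WorldTrackingProvider()

    // Run the session with world tracking; requires a real device
    func start() async {
        do {
            try await session.run([worldInfo])
        } catch {
            print("ARKitSession error:", error)
        }
    }

    // Returns the device (head) transform in world space, if available
    func currentHeadTransform() -> simd_float4x4? {
        guard let pose = worldInfo.queryDeviceAnchor(
            atTimestamp: CACurrentMediaTime()) else { return nil }
        return pose.originFromAnchorTransform
    }
}

Because the query is expensive, call it on demand (e.g. once when placing content) rather than every frame.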

svarrall