Model on a detected plane
CGPoint is an XY point in the iPhone's screen coordinate space, so there's no need to convert it to an XYZ point. If you need a 3D point on a detected plane for accommodating your model, all you have to do is generate an object from ARRaycastResult (or from ARHitTestResult, in case you're using ARKit + SceneKit).
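For the ARKit + SceneKit route, here is a minimal sketch of the same idea using ARHitTestResult. It assumes an ARSCNView property named sceneView (a name chosen for illustration); note that hitTest(_:types:) is deprecated in newer ARKit versions in favor of raycasting, but it still works:

```swift
@objc func tapped(_ sender: UITapGestureRecognizer) {
    // CGPoint in the view's coordinate space
    let point = sender.location(in: sceneView)
    let results = sceneView.hitTest(point, types: .estimatedHorizontalPlane)
    if let result = results.first {
        let sphere = SCNNode(geometry: SCNSphere(radius: 0.02))
        // Place the node at the 3D point found on the detected plane
        sphere.simdWorldTransform = result.worldTransform
        sceneView.scene.rootNode.addChildNode(sphere)
    }
}
```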
import RealityKit
import ARKit

var arView = ARView(frame: .zero)

@objc func tappingScreen(_ sender: UITapGestureRecognizer) {
    // Raycast from the tap location (use arView.center to aim at the screen's center instead)
    let results: [ARRaycastResult] = arView.raycast(from: sender.location(in: arView),
                                                allowing: .estimatedPlane,
                                               alignment: .horizontal)
    if let result: ARRaycastResult = results.first {
        let model = ModelEntity(mesh: .generateSphere(radius: 0.02))
        let anchor = AnchorEntity(world: result.worldTransform)
        anchor.addChild(model)
        arView.scene.anchors.append(anchor)
    }
}
Model at the camera's position
Creating a model exactly where ARCamera is located in the current frame is super easy:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    let model = ModelEntity(mesh: .generateSphere(radius: 0.02))
    let anchor = AnchorEntity(world: arView.cameraTransform.matrix)
    anchor.addChild(model)
    model.position.z = -0.5    // 50 cm in front of the camera
    arView.scene.anchors.append(anchor)
}
Also, you can multiply the camera's transform by a desired offset matrix:
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.5    // 50 cm offset along the camera's forward (-Z) axis
// Multiply the camera's matrix by the offset matrix
let transform = simd_mul(arView.cameraTransform.matrix, translation)
model.transform.matrix = transform
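Putting the pieces together, a minimal sketch of the matrix-offset approach (assuming arView and a RealityKit session already exist) looks like this:

```swift
let model = ModelEntity(mesh: .generateSphere(radius: 0.02))

// Build an offset matrix 50 cm in front of the camera
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.5

// Combine the camera's transform with the offset
let transform = simd_mul(arView.cameraTransform.matrix, translation)

// Anchor the model at the resulting world transform
let anchor = AnchorEntity(world: transform)
anchor.addChild(model)
arView.scene.anchors.append(anchor)
```

The difference from the touchesBegan version above is only where the offset is applied: there it lives in the model's local position, here it is baked into the anchor's world transform.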
Additional info
This post is also helpful.