I'm developing an AR app for iOS that lets the user place a model in the physical world, using ARKit and SceneKit.
I've looked at this Apple sample code for inspiration. In the sample project, they use tracked raycasts to position 3D models in the scene. This is close to what I want, which led me to assume I need to do the same to get the most accurate positioning.
However, when I use a tracked raycast to position my model, the model drifts around the scene a lot as ARKit updates the position of the raycast.
I get much more stable positioning with a non-tracked raycast. That makes me wonder: what is the intended use case for a tracked raycast? Am I misunderstanding this API?
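For context, here's a simplified sketch of how I'm using the tracked raycast (the node name and tap handling are from my project, trimmed down):

```swift
import ARKit
import SceneKit

class ViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!

    // Node holding my model (already added to the scene graph).
    let modelNode = SCNNode()
    // Keep a reference so the raycast isn't deallocated and can be stopped.
    var trackedRaycast: ARTrackedRaycast?

    // Called when the user taps to place the model.
    func placeModel(at screenPoint: CGPoint) {
        guard let query = sceneView.raycastQuery(from: screenPoint,
                                                 allowing: .estimatedPlane,
                                                 alignment: .horizontal) else { return }

        // Stop any previous tracked raycast before starting a new one.
        trackedRaycast?.stopTracking()

        // ARKit calls this update handler repeatedly as its world
        // understanding changes; each update repositions my model,
        // and this is where I see the drift.
        trackedRaycast = sceneView.session.trackedRaycast(query) { [weak self] results in
            guard let result = results.first else { return }
            self?.modelNode.simdWorldTransform = result.worldTransform
        }
    }
}
```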
I've tried:
- Positioning the model using an image anchor. This is very stable.
- Positioning the model using a non-tracked raycast. This is about as stable as the image anchor.
- Positioning the model using a tracked raycast. This drifts all over the scene.
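The non-tracked version (which is stable for me) is essentially the same query, but resolved once instead of continuously:

```swift
// One-shot (non-tracked) raycast: position the model once and leave it.
// modelNode / sceneView are the same properties as in my tracked version.
func placeModelOnce(at screenPoint: CGPoint) {
    guard let query = sceneView.raycastQuery(from: screenPoint,
                                             allowing: .estimatedPlane,
                                             alignment: .horizontal),
          let result = sceneView.session.raycast(query).first else { return }
    modelNode.simdWorldTransform = result.worldTransform
}
```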
I also understand what an AR raycast in general is for: getting the intersection of a 2D point on the screen with the 3D geometry that ARKit is tracking, as this post has already explained.