
In RealityKit there is the default EntityTranslationGestureRecognizer, which you can install on entities to allow dragging them along their anchoring plane. In my use case, only one selected entity may be moved at a time. As such, I would like the user to be able to drag the selected entity even while it is behind another entity from the camera's point of view.

I have tried setting a delegate on the EntityTranslationGestureRecognizer and implementing gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer, shouldReceive touch: UITouch) -> Bool, but the gesture recognizer still does not receive the touch when another entity is in front.
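
For reference, the attempt looked roughly like this (only a sketch; DragDelegate and installDraggable are hypothetical names):

import UIKit
import RealityKit

// Sketch of the delegate attempt; DragDelegate is a made-up name.
class DragDelegate: NSObject, UIGestureRecognizerDelegate {
    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldReceive touch: UITouch) -> Bool {
        return true   // accept every touch; occluded entities still miss it
    }
}

// Installation, assuming `entity` already has a CollisionComponent.
// Keep a strong reference to the delegate elsewhere; `delegate` is weak.
func installDraggable(_ entity: Entity & HasCollision,
                      in arView: ARView,
                      delegate: DragDelegate) {
    arView.installGestures(.translation, for: entity)
        .forEach { $0.delegate = delegate }
}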

My assumption is that, behind the scenes, it performs a hit test and possibly only considers the first entity that is hit; I'm not sure whether that is correct. Were that the case, ideally there would be some way to set a collision mask on the hit test that the translation gesture performs, but I have not found anything of the sort.
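
For comparison, RealityKit's manual scene ray cast does accept a collision mask; a hypothetical helper illustrating the kind of filter I'm looking for:

import UIKit
import RealityKit

// Hypothetical helper: a manual ray cast restricted by a collision mask,
// which is exactly the filter the built-in gesture doesn't seem to expose.
func firstHit(in arView: ARView,
              at screenPoint: CGPoint,
              mask: CollisionGroup) -> CollisionCastHit? {
    guard let ray = arView.ray(through: screenPoint) else { return nil }
    return arView.scene.raycast(origin: ray.origin,
                                direction: ray.direction,
                                query: .nearest,
                                mask: mask).first
}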

Do I just need to re-implement the entire behavior myself with a plain UIPanGestureRecognizer?
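
If so, a minimal sketch of that route might look like the following, assuming the selected entity is tracked externally and is dragged along a horizontal world plane (SelectedEntityDragger and its properties are hypothetical names):

import UIKit
import RealityKit

// Hypothetical hand-rolled drag: it ignores occluders entirely because it
// never hit-tests; it just moves the externally selected entity.
final class SelectedEntityDragger: NSObject {
    weak var arView: ARView?
    weak var selectedEntity: Entity?     // set when the user selects a model
    private var dragPlaneY: Float = 0    // world-space height of the drag plane

    func install(on arView: ARView) {
        self.arView = arView
        let pan = UIPanGestureRecognizer(target: self,
                                         action: #selector(handlePan(_:)))
        arView.addGestureRecognizer(pan)
    }

    @objc private func handlePan(_ gesture: UIPanGestureRecognizer) {
        guard let arView = arView, let entity = selectedEntity else { return }
        guard let ray = arView.ray(through: gesture.location(in: arView))
        else { return }

        switch gesture.state {
        case .began:
            // Drag along the horizontal plane the entity currently sits on.
            dragPlaneY = entity.position(relativeTo: nil).y
        case .changed:
            // Intersect the touch ray with the plane y == dragPlaneY.
            guard abs(ray.direction.y) > 1e-5 else { return }
            let t = (dragPlaneY - ray.origin.y) / ray.direction.y
            guard t > 0 else { return }
            entity.setPosition(ray.origin + t * ray.direction, relativeTo: nil)
        default:
            break
        }
    }
}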

Thanks for any suggestions.

1 Answer

Hypersimple solution

The easiest way to control a model with RealityKit's transform gestures, even when it's occluded by another model, is to assign a collision shape only to the model being controlled.

// Give only the draggable model a collision shape...
modelOne.generateCollisionShapes(recursive: false)

// ...so the gesture's internal hit test can only ever find modelOne.
arView.installGestures(.translation, for: modelOne as! (Entity & HasCollision))

Advanced solution

However, if both models have collision shapes, a different approach is needed. The example below combines EntityTranslationGestureRecognizer, EntityScaleGestureRecognizer, a SwiftUI TapGesture, a collection of CollisionCastHit results, and collision masks.

(GIF animation demonstrating the result.)

I've implemented a SwiftUI 2D tap gesture to deactivate the cube's collision shape. TapGesture() calls the raycasting method, which fires a 3D ray from the center of the screen. When the ray hits a model carrying the required collision mask, the string "Raycasted" appears on the screen; when it hits the occluding cube instead, the cube is tinted and its collision component is removed, so it no longer blocks RealityKit's drag gesture for model translation.

import RealityKit
import SwiftUI
import ARKit
import PlaygroundSupport     // iPadOS Swift Playgrounds app version

struct ContentView: View {
    
    @State private var arView = ARView(frame: .zero)
    @State var mask1 = CollisionGroup(rawValue: 1 << 0)
    @State var mask2 = CollisionGroup(rawValue: 1 << 1)
    @State var text: String = ""
    
    var body: some View {
        ZStack {
            ARContainer(arView: $arView, mask1: $mask1, mask2: $mask2)
                .gesture(
                    TapGesture().onEnded { raycasting() }
                )
            Text(text).font(.largeTitle)
        }
    }
    func raycasting() {
        // Fire a 3D ray from the center of the screen.
        guard let ray = arView.ray(through: arView.center) else { return }
        let castHits = arView.scene.raycast(origin: ray.origin,
                                            direction: ray.direction)

        for result in castHits {
            if (result.entity as! Entity & HasCollision)
                                            .collision?.filter.mask == mask1 {
                text = "Raycasted"
            } else {
                // Tint the occluder and drop its collision shape so it
                // no longer blocks the translation gesture.
                (result.entity as! ModelEntity).model?.materials[0] =
                           UnlitMaterial(color: .green.withAlphaComponent(0.7))
                (result.entity as! Entity & HasCollision).collision = nil
            }
        }
    }
}

struct ARContainer: UIViewRepresentable {
    
    @Binding var arView: ARView
    @Binding var mask1: CollisionGroup
    @Binding var mask2: CollisionGroup
    
    func makeUIView(context: Context) -> ARView {
        arView.cameraMode = .ar
        arView.renderOptions = [.disablePersonOcclusion, .disableDepthOfField]
        
        // Sphere: the draggable model, tagged with mask1.
        let model1 = ModelEntity(mesh: .generateSphere(radius: 0.2))
        model1.generateCollisionShapes(recursive: false)
        model1.collision?.filter.mask = mask1
        
        // Cube: the occluder in front of the sphere, tagged with mask2.
        let model2 = ModelEntity(mesh: .generateBox(size: 0.2),
                                 materials: [UnlitMaterial(color: .green)])
        model2.position.z = 0.4
        model2.generateCollisionShapes(recursive: false)
        model2.collision?.filter.mask = mask2
        
        let anchor = AnchorEntity(world: [0,0,-1])
        anchor.addChild(model1)
        anchor.addChild(model2)
        arView.scene.anchors.append(anchor)
        
        arView.installGestures(.translation, 
                                for: model1 as! (Entity & HasCollision))
        arView.installGestures(.scale, 
                                for: model2 as! (Entity & HasCollision))
        return arView
    }   
    func updateUIView(_ view: ARView, context: Context) { }
}

PlaygroundPage.current.needsIndefiniteExecution = true
PlaygroundPage.current.setLiveView(ContentView())
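
If removing the occluder's collision component is too destructive for your app (e.g. physics must keep working), one possible variation (not implemented above) is to keep both collision shapes and distinguish the models by collision group instead, so that manual ray casts can still be restricted by mask. A sketch of the idea:

// Hypothetical variation: keep both collision shapes alive for physics,
// but place each model in its own collision group.
// (The check in raycasting() would then compare filter.group, not filter.mask.)
model1.collision?.filter = CollisionFilter(group: mask1, mask: .all)
model2.collision?.filter = CollisionFilter(group: mask2, mask: .all)

// A manual ray cast can then be limited to the draggable model only:
// arView.scene.raycast(origin: o, direction: d, mask: mask1)
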
  • Hey Andy, thanks for your comment. Unfortunately, neither of these approaches will work for me, as I need to retain the active collision component to enable other behaviors (physics, receiving taps on the other models via raycasting, etc.). Shame that they don't allow us to set a collision filter for the gestures. Thanks anyways! – JBKKNOWL Feb 13 '23 at 22:53