So I have been struggling with a complex (10+ children) scene that I created and tested in Reality Composer, but that I REALLY struggle with when bringing it into RealityKit.

What I want to do is this: after the scene is imported and all the actions on each model are implemented, I want to be able to drag all the models at once (so far so good; this much I was able to do), AND, once the dragging ends, I want it to return to the initial state, where I can tap each entity separately and its animation plays. BUT this is the part that doesn't work. What frustrates me is that I was able to do all of this so easily in Reality Composer, and it worked so well when I tested it there... but when I bring it into RealityKit, IT'S A MESS!!!
What complicates the issue is that, in order to drag all the elements at once, I had to create a separate parent entity that groups all the children. This works for dragging the entire group, but when I want to return to the initial situation, it doesn't: if I try to remove the group entity, the entire scene disappears.

I did get a hacky version working where the model starts out non-draggable and, after the animations have been activated, turns into a draggable model. Sadly this is only a partial solution: it doesn't let me go back.
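The hacky toggle I mean is roughly the following; a minimal sketch, assuming I keep references to the recognizers that `installGestures(_:for:)` returns (the variable and function names here are mine, not from my project):

```swift
import RealityKit
import UIKit

// Sketch: make dragging switchable by enabling/disabling the gesture
// recognizers that ARView.installGestures(_:for:) returns.
var dragRecognizers: [UIGestureRecognizer] = []

func enableDragging(for model: ModelEntity, in arView: ARView) {
    // Collision shapes are required before RealityKit gestures can hit-test the entity.
    model.generateCollisionShapes(recursive: true)
    dragRecognizers = arView.installGestures(.translation, for: model)
}

func setDragging(_ enabled: Bool) {
    // Toggling isEnabled switches between "draggable" and "tap-only" modes.
    dragRecognizers.forEach { $0.isEnabled = enabled }
}
```

With something like this I can flip dragging on after the tap actions run, but I still can't cleanly get back to per-entity taps. Here is my current code: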
func makeUIView(context: Context) -> ARView {
    ExperienceFullB.loadBoxAsync { result in
        do {
            let boxScene = try result.get()
            arView.scene.anchors.append(boxScene)

            func handleTapOnEntity(_ entity: Entity?) {
                guard let entity = entity else { return }
                // redModel.setParent(cubesAnchor)
                print("hello")

                // Wrap the model in a new parent so the whole group can be dragged.
                let redModel = boxScene.children[0].children[0]
                // ModelEntity already conforms to HasCollision, so no cast is needed.
                let group = ModelEntity()
                // boxScene.addChild(redModelPhysics)
                group.addChild(redModel)
                group.generateCollisionShapes(recursive: false)
                self.arView.installGestures(.all, for: group)

                let shape = ShapeResource.generateBox(width: 1, height: 1, depth: 1)
                let collision = CollisionComponent(shapes: [shape],
                                                   mode: .trigger,
                                                   filter: .sensor)
                group.components.set(collision)

                let anchor = AnchorEntity()
                anchor.addChild(group)
                anchor.scale = [5, 5, 5]
                arView.scene.anchors.append(anchor)
                // print(cubesAnchor)
                print(boxScene)
            }

            boxScene.actions.behavior.onAction = handleTapOnEntity(_:)
        } catch {
            print("Error: \(error.localizedDescription)")
        }
    }
    return arView
}
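What I think I'm missing is a way to dissolve the group without nuking the scene. Here is a minimal sketch of the kind of thing I mean, using `Entity.setParent(_:preservingWorldTransform:)` to hand each child back to its remembered original parent; the helper type and all names in it are hypothetical, not from my project:

```swift
import RealityKit

/// Hypothetical helper (my naming): remembers each child's original parent
/// before grouping, so the group can later be dissolved cleanly.
final class DragGroup {
    private var originalParents: [UInt64: Entity] = [:]   // keyed by Entity.id

    /// Reparent the entities under `group`, keeping them visually in place.
    func form(_ group: ModelEntity, from entities: [Entity]) {
        for child in entities {
            if let parent = child.parent {
                originalParents[child.id] = parent
            }
            // preservingWorldTransform keeps the child where it is on screen.
            child.setParent(group, preservingWorldTransform: true)
        }
    }

    /// Hand every child back to its remembered parent, then drop the group.
    func dissolve(_ group: ModelEntity) {
        // Copy the collection first, because reparenting mutates group.children.
        for child in Array(group.children) {
            child.setParent(originalParents[child.id], preservingWorldTransform: true)
        }
        originalParents.removeAll()
        group.removeFromParent()
    }
}
```

The idea is that removing the group no longer takes the children with it, because they have already been reparented back into the scene.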
These are the rabbit holes I went down:

Drag gesture recognizer with SwiftUI: not applicable. I am also very confused about the dragging gesture in general; I have come across multiple SwiftUI examples that wrap a UIView, but I don't think that approach works with ARKit/RealityKit.

Pan gesture recognizer from UIKit: this doesn't work either. I found an Apple sample (like this example answered by Andy Jazz) where the entity is created in code (not imported from Reality Composer), and it's only ONE element that you can drag; it works via HasPhysics, switching the physics body mode between static, kinematic, and so forth. Here's the part of the code I tried to adapt:
// MARK: - Gestures -

/// Sets up the pan gesture recognizer used to move entities
fileprivate func setGestures() {
    let panGesture = UIPanGestureRecognizer(target: self, action: #selector(panned(_:)))
    panGesture.delegate = self
    arView.addGestureRecognizer(panGesture)
}

/// Delegate method called by the pan gesture recognizer
@objc
func panned(_ sender: UIPanGestureRecognizer) {
    switch sender.state {
    case .ended, .cancelled, .failed:
        // When a pan gesture ends for any reason, all entities
        // return to being dynamic physics bodies.
        movableEntities.compactMap { $0 }.forEach { $0.setPhysicsBodyMode(to: .dynamic) }
        // Have VoiceOver announce when dragging has ended.
        if #available(iOS 14.0, *) {
            let announce = "Dragging ended."
            UIAccessibility.post(notification: .announcement, argument: announce)
        }
    default:
        return
    }
}

/// Needed for the sample project's custom pan gesture recognizer to work. RealityKit installs its own
/// gesture recognizers (EntityGestureRecognizers) behind the scenes. This project's gesture recognizer
/// needs to co-exist and run simultaneously with those. Returning true allows them to run together.
func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                       shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
    true
}

func gestureRecognizerShouldBegin(_ gestureRecognizer: UIGestureRecognizer) -> Bool {
    // Turn the gesture recognizer's associated entity into a kinematic
    // physics body, which allows the user to manipulate its position.
    guard let translationGesture = gestureRecognizer as? EntityTranslationGestureRecognizer,
          let entity = translationGesture.entity as? MovableEntity else { return true }
    entity.physicsBody?.mode = .kinematic
    return true
}
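For context, `MovableEntity` and `setPhysicsBodyMode(to:)` are not RealityKit API; they belong to that sample's own code. This is my rough guess at what they must look like (names from the sample, body reconstructed by me):

```swift
import RealityKit

// My reconstruction of the sample's custom entity; not RealityKit API.
class MovableEntity: Entity, HasPhysicsBody, HasCollision {
    /// Switch the entity's physics body between static, kinematic, and dynamic.
    func setPhysicsBodyMode(to mode: PhysicsBodyMode) {
        physicsBody?.mode = mode
    }
}
```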
The complication is, again, the parent entity that was added for grouping, which I don't know how to remove correctly. I have used

redModel.setParent(parentTwo)

but if I create a new 'dad' entity this way, the action-clickable gestures I created in Reality Composer don't come back. HEEEEELLLLPPPPPPPP!!!!