I have created 3 "scenes" inside the Experience.rcproject file that is created when you start a new Augmented Reality project in Xcode.

Having worked with 3D a lot, I would call these 3 objects inside a scene, but inside Experience.rcproject I have added 3 "scenes", each containing the same 3D model. The first one is attached to a horizontal plane, the second one to a vertical plane and the third one to an image.

I am working with RealityKit for the first time and learning along the way.

My idea in doing so is to load the right object when I want it attached to a horizontal plane, a vertical plane or an image.

This is how I accomplished it.

I have modified the Experience.swift file provided by Apple to accept scene names, like this:

public static func loadBox(namedFile: String) throws -> Experience.Box {
    // Locate the compiled .reality file inside the bundle
    guard let realityFileURL = Foundation.Bundle(for: Experience.Box.self).url(forResource: "Experience", withExtension: "reality") else {
        throw Experience.LoadRealityFileError.fileNotFound("Experience.reality")
    }

    // Append the scene name so the right scene inside the file is loaded
    let realityFileSceneURL = realityFileURL.appendingPathComponent(namedFile, isDirectory: false)
    let anchorEntity = try Experience.Box.loadAnchor(contentsOf: realityFileSceneURL)
    return createBox(from: anchorEntity)
}

and I call this line

let entity = try! Experience.loadBox(namedFile:sceneName)

whenever I want, but I have to use this code:

// I have to keep a reference to the entity so I can remove it from its parent and nil it out
currentEntity?.removeFromParent()
currentEntity = nil

// I have to load the entity again, now with another name
let entity = try! Experience.loadBox(namedFile:sceneName)

// store a reference to it, so I can remove it in the future
currentEntity = entity

// remove the old one from the scene
arView.scene.anchors.removeAll()

// add the new one
arView.scene.anchors.append(entity)

This code is stupid and I am sure there is a better way.

Any thoughts?

1 Answer

Hierarchy in RealityKit / Reality Composer

I think this is more of a "theoretical" question than a practical one. First, I should say that editing the Experience file containing scenes with anchors and entities isn't a good idea.

In RealityKit and Reality Composer there's a quite definite hierarchy in case you create a single object in the default scene:

Scene –> AnchorEntity -> ModelEntity 
                              |
                           Physics
                              |
                          Animation
                              |
                            Audio
                          

If you place two 3D models in a scene, they share the same anchor:

Scene –> AnchorEntity – – – -> – – – – – – – – ->
                             |                  |
                       ModelEntity01      ModelEntity02
                             |                  |
                          Physics            Physics
                             |                  |
                         Animation          Animation
                             |                  |
                           Audio              Audio

AnchorEntity in RealityKit defines which properties of the World Tracking config are running in the current ARSession: horizontal/vertical plane detection, image detection, body detection, etc.

Let's look at those parameters:

AnchorEntity(.plane(.horizontal, classification: .floor, minimumBounds: [1, 1]))

AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [0.5, 0.5]))

AnchorEntity(.image(group: "Group", name: "model"))
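
For instance, one of the anchors above can be used like this – a minimal sketch, assuming the arView from the question and a simple generated sphere:

// Pin a primitive model to a vertical plane anchor and add it to the scene
let wallAnchor = AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [0.5, 0.5]))

let ballEntity = ModelEntity(mesh: .generateSphere(radius: 0.05), materials: [SimpleMaterial()])
wallAnchor.addChild(ballEntity)

arView.scene.anchors.append(wallAnchor)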

Here you can read about the Entity-Component-System paradigm.
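
A minimal sketch of that paradigm (the cube here is just an illustrative primitive): an entity is only a container, and its traits live in components.

// An entity is a container; its look and behavior live in components
let cube = ModelEntity(mesh: .generateBox(size: 0.1), materials: [SimpleMaterial()])

// The initializer above has already attached a ModelComponent...
print(cube.components.has(ModelComponent.self))    // true

// ...and we can attach more components to the same entity, e.g. a collision shape
cube.components.set(CollisionComponent(shapes: [.generateBox(size: [0.1, 0.1, 0.1])]))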


Combining two scenes coming from Reality Composer

For this post I've prepared two scenes in Reality Composer – a first scene (ConeAndBox) with horizontal plane detection and a second scene (Sphere) with vertical plane detection. If you combine these scenes in RealityKit into one bigger scene, you'll get both types of plane detection – horizontal and vertical.

The cone and the box are pinned to one anchor in this scene.

In RealityKit I can combine these scenes into one scene.

// Plane Detection with a Horizontal anchor
let coneAndBoxAnchor = try! Experience.loadConeAndBox()
coneAndBoxAnchor.children[0].anchor?.scale = [7, 7, 7]
coneAndBoxAnchor.goldenCone!.position.y = -0.1  //.children[0].children[0].children[0]
arView.scene.anchors.append(coneAndBoxAnchor)

coneAndBoxAnchor.name = "mySCENE"
coneAndBoxAnchor.children[0].name = "myANCHOR"
coneAndBoxAnchor.children[0].children[0].name = "myENTITIES"

print(coneAndBoxAnchor)
     
// Plane Detection with a Vertical anchor
let sphereAnchor = try! Experience.loadSphere()
sphereAnchor.steelSphere!.scale = [7, 7, 7]
arView.scene.anchors.append(sphereAnchor)

print(sphereAnchor)

In Xcode's console you can see the ConeAndBox scene hierarchy with the names given in RealityKit, and the Sphere scene hierarchy with no names given.

It's important to note that our combined scene now contains two anchors (one per Reality Composer scene) in its anchors array. Use the following command to print this array:

print(arView.scene.anchors)

It prints:

[ 'mySCENE' : ConeAndBox, '' : Sphere ]
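
And since the ConeAndBox scene was given a name earlier, here is a small sketch of how to get it back from the combined scene by that name:

// Retrieve the named anchor back from the combined scene
if let myScene = arView.scene.findEntity(named: "mySCENE") {
    print(myScene.children.count)
}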


You can reassign the type of tracking via AnchoringComponent (instead of plane detection you can assign image detection):

coneAndBoxAnchor.children[0].anchor!.anchoring = AnchoringComponent(.image(group: "AR Resources", 
                                                                            name: "planets"))


Retrieving entities and connecting them to a new AnchorEntity

To decompose/reassemble the hierarchical structure of your scene, you need to retrieve all entities and pin them to a single anchor. Take into consideration that tracking one anchor is a less intensive task than tracking several, and one anchor is much more stable – in terms of the relative positions of scene models – than, for instance, 20 anchors.

let coneEntity = coneAndBoxAnchor.goldenCone!
coneEntity.position.x = -0.2
    
let boxEntity = coneAndBoxAnchor.plasticBox!
boxEntity.position.x = 0.01
    
let sphereEntity = sphereAnchor.steelSphere!
sphereEntity.position.x = 0.2
    
let anchor = AnchorEntity(.image(group: "AR Resources", name: "planets"))
anchor.addChild(coneEntity)
anchor.addChild(boxEntity)
anchor.addChild(sphereEntity)
    
arView.scene.anchors.append(anchor)


Useful links

Now you have a deeper understanding of how to construct scenes and retrieve entities from those scenes. If you need other examples, look at THIS POST and THIS POST.


P.S.

Additional code showing how to load scenes from ExperienceX.rcproject:

import ARKit
import RealityKit

class ViewController: UIViewController {
    
    @IBOutlet var arView: ARView!
    
    override func viewDidLoad() {
        super.viewDidLoad()
                    
        // RC generated "loadGround()" method automatically
        let groundArrowAnchor = try! ExperienceX.loadGround()
        groundArrowAnchor.arrowFloor!.scale = [2,2,2]
        arView.scene.anchors.append(groundArrowAnchor)

        print(groundArrowAnchor)
    }
}

  • Your answer is fantastic. Thanks. Just one question: I see that you have two separate methods, `loadConeAndBox()` and `loadSphere()`. Will those be copies of the original Apple's `loadBox()`? Excuse my stupidity but can you please post this part of your code? – Duck Jul 10 '20 at 13:21
  • Yes, I modified a default Reality Composer's "Box" scene, putting just a Cone model there. – Andy Jazz Jul 10 '20 at 13:22
  • I ask this because I thought the scene names would be exposed in code in a way that could be used in the loading methods. Anyway, see my `loadBox:namedFile` above. I guess you mean that. – Duck Jul 10 '20 at 13:24
  • Thanks again. Now I have to discover why calls to this `loadBox()` are leaking in my code. But this is [another question of mine](https://stackoverflow.com/questions/62823901/leak-in-call-to-code-created-by-apple-but-where). – Duck Jul 10 '20 at 13:26
  • @Duck, just one thing – as I've written in this post, editing the `Experience` file is not a good idea... Maybe the memory leak is a result of this... – Andy Jazz Jul 10 '20 at 13:33
  • So how do I load one scene or the other? – Duck Jul 10 '20 at 13:36
  • Please give me your real Reality Composer project. – Andy Jazz Jul 10 '20 at 13:38
  • [here we go](https://easyupload.io/6mqoqr)... please explain how I load Ground and Wall and have access to ArrowGround and ArrowWall like you explained in your code, without modifying Apple's original `loadBox()`? – Duck Jul 10 '20 at 13:45
  • This is what I am talking about... `loadGround()` does not exist in my code. Can you show the code you would put in your `loadGround()` to see if we are talking about the same thing? Thanks. – Duck Jul 10 '20 at 14:16
  • One of my problems is that I was using a copy of `Experience.swift` and that copy was, obviously, not being updated with the scene names. Now it is working fine. Thanks. – Duck Jul 10 '20 at 16:06
  • Hi @AndyFedoroff, I was reading your post: https://stackoverflow.com/questions/62623726/what-is-the-real-focal-length-of-the-camera-used-in-realitykit/62627471#62627471. I am wondering, does that mean ARKit always uses a focal length of 28mm (including face tracking) regardless of whether we see it as 20mm printed on the iPhone X? – swiftlearneer Jul 16 '20 at 16:10
  • Hi @swiftlearneer, for the rear camera – yes, it uses 28mm. For the selfie camera – I don't know, there's no info. – Andy Jazz Jul 16 '20 at 16:48
  • @AndyFedoroff Thanks! I see! So the intrinsic matrix here only refers to the rear (back) camera and not the TrueDepth camera? Would I have to go look into AVCameraCalibrationData instead? – swiftlearneer Jul 16 '20 at 18:06
  • I'll look into it tomorrow. I'm interested in it too)) – Andy Jazz Jul 16 '20 at 18:15
  • @AndyFedoroff Thanks and that would be great! – swiftlearneer Jul 16 '20 at 18:57
  • @AndyFedoroff. If I can ask a follow-up question for https://stackoverflow.com/questions/62623726/what-is-the-real-focal-length-of-the-camera-used-in-realitykit/62627471#62627471 . I tried `@IBOutlet var sceneView: ARSCNView!` and `sceneView.pointOfView?.camera?.focalLength` but got an error "Consecutive declarations on a line must be separated by ';'" – swiftlearneer Jul 16 '20 at 18:58
  • @AndyFedoroff. It works now when I put it inside a function and call it. However, these intrinsic parameters are always the same for the back camera even though I move it along? – swiftlearneer Jul 16 '20 at 19:55
  • To make the parameters update, use the `renderer(...)` or `session(...)` delegate methods. They work when ARSCNViewDelegate or ARSessionDelegate is implemented. https://developer.apple.com/documentation/arkit/arscnview/2865797-delegate and https://developer.apple.com/documentation/arkit/arsession/2865614-delegate – Andy Jazz Jul 18 '20 at 02:03
  • Hi @AndyFedoroff! Did not check your comment but I will give this a try for the back camera! Have you had any luck with the TrueDepth camera parameters? I followed this suggestion https://stackoverflow.com/questions/62927167/swift-get-the-truthdepth-camera-parameters-for-face-tracking-in-arkit – it seems that it would work for capturing a photo rather than working with face tracking (capturing frames) – swiftlearneer Jul 20 '20 at 14:46
  • So I put your suggestions as a function https://imgur.com/Ps9uxlg like so; how should I merge it with the renderer here: https://imgur.com/8tw7WVV ? The renderer I have is for the node updates. I am sorry for posting screenshots; I wish I had a better way to contact you :( – swiftlearneer Jul 20 '20 at 15:24
  • Hi @swiftlearneer, sorry, I'm on vacation till 30th August. I have only a smartphone with me, so it's a little bit inconvenient to answer posts from a smartphone((. And the internet speed isn't always good. Have a good time, see you later! – Andy Jazz Jul 20 '20 at 15:51
  • All good, @AndyFedoroff! Just a quick question if you have this in mind, as it was for your earlier post: https://stackoverflow.com/questions/57582930/why-does-arfaceanchor-have-negative-z-position. Do you know how to flip the z axis for ARFaceAnchor? I tried to figure it out here: https://stackoverflow.com/questions/63023016/swift-correct-the-arfaceanchor-from-left-handed-to-right-handed-coordinate-syst. Have a nice vacation :) – swiftlearneer Jul 22 '20 at 00:30
  • Hey @Duck! Could I ask you a question about the `render` function in a SceneKit/ARKit related question? :( – swiftlearneer Aug 05 '20 at 15:26
  • @swiftlearneer - I am still learning RealityKit... never used that function. – Duck Aug 05 '20 at 15:34
  • @Duck All good! Btw, I'm following some of your posts and they are good! – swiftlearneer Aug 05 '20 at 15:35
  • @swiftlearneer - Thanks. I have to ask everything, because the documentation Apple writes is written with disdain. – Duck Aug 05 '20 at 15:48
  • Hi @Duck! Quick question. Do you know how to capture the scene view's audio and video output? I think I saw one of your posts earlier about this but not too sure if I can find it now. Currently I am using ReplayKit to achieve this but it has some delay in between because of the user permission... Could you please help :( – swiftlearneer Aug 18 '20 at 21:07