I would like to create a main AR view (cameraMode: .ar) and a second picture-in-picture view (cameraMode: .nonAR) that shows the same scene from an orthogonal, top-down virtual camera in RealityKit. The goal of the second view is to visualize the model entities from a different perspective within the same scene, without the camera feed.
While I do not have experience with SceneKit, it appears that in a non-AR SceneKit scene this can be done by rendering the same scene in two views and changing the pointOfView property of the second view (see this Stack Overflow question and answer).
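As far as I can tell, the pattern there looks roughly like this. This is only a sketch based on that answer, which I have not verified myself, and all the names are my own:

import SceneKit

// Both views render the same scene.
let scene = SCNScene()
let mainView = SCNView()
let topView = SCNView()
mainView.scene = scene
topView.scene = scene

// A second camera with an orthographic projection, looking straight down.
let topCamera = SCNCamera()
topCamera.usesOrthographicProjection = true
topCamera.orthographicScale = 2.0

let topCameraNode = SCNNode()
topCameraNode.camera = topCamera
topCameraNode.position = SCNVector3(0, 5, 0)
topCameraNode.eulerAngles = SCNVector3(-Float.pi / 2, 0, 0)
scene.rootNode.addChildNode(topCameraNode)

// The second view renders from the top-down camera.
topView.pointOfView = topCameraNode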
I cannot find a similar property in RealityKit. Is this possible in RealityKit? Alternatively, has anyone achieved this by using SceneKit as the renderer instead of RealityKit?
Without changing the position of the nonAR camera, I have tried to create the two views described above in SwiftUI, but I receive the following assertion failure:
-[MTLTextureDescriptorInternal validateWithDevice:]:1248: failed assertion `Texture Descriptor Validation MTLTextureDescriptor has width (4294967295) greater than the maximum allowed size of 16384. MTLTextureDescriptor has height (4294967295) greater than the maximum allowed size of 16384. MTLTextureDescriptor has invalid pixelFormat (0).
(4294967295 is UInt32.max, so the requested render-target size looks uninitialized rather than genuinely oversized.)
Sample code below:
import SwiftUI
import RealityKit

// Main view: renders AR content on top of the camera feed.
let arViewOne = ARView(frame: .zero, cameraMode: .ar, automaticallyConfigureSession: true)

// Picture-in-picture view: virtual content only, no camera feed.
let arViewTwo = ARView(frame: .zero, cameraMode: .nonAR, automaticallyConfigureSession: true)

// Box scene from the default Reality Composer project.
let boxAnchor = try! Experience.loadBox()

struct ContentView: View {
    var body: some View {
        HStack {
            ARViewContainerOne().edgesIgnoringSafeArea(.all)
            ARViewContainerTwo().edgesIgnoringSafeArea(.all)
        }
    }
}

struct ARViewContainerOne: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        arViewOne.scene.anchors.append(boxAnchor)
        return arViewOne
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

struct ARViewContainerTwo: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        return arViewTwo
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}
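For reference, this is how I would expect to drive the virtual camera in the .nonAR view. This is only a minimal sketch (cameraAnchor and camera are my own names); since RealityKit only seems to expose PerspectiveCamera, a camera placed overhead and pitched straight down is the closest I can get to an orthogonal view:

import RealityKit
import simd

// Anchor for a free-floating virtual camera in the nonAR scene.
let cameraAnchor = AnchorEntity(world: .zero)

// RealityKit appears to offer only PerspectiveCamera (no orthographic
// camera), so approximate the orthogonal view with an overhead camera.
let camera = PerspectiveCamera()
camera.position = [0, 2, 0]
camera.orientation = simd_quatf(angle: -.pi / 2, axis: [1, 0, 0]) // pitch 90° down

cameraAnchor.addChild(camera)
arViewTwo.scene.anchors.append(cameraAnchor)

If RealityKit has an equivalent of SceneKit's pointOfView, or an orthographic camera component I have missed, that would be ideal.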