11

ARKit's scanning app allows us to create an ARReferenceObject, and using it we can reliably recognize the position and orientation of real-world objects. We can also save the finished .arobject file.


However, ARReferenceObject contains only the spatial feature information needed for ARKit to recognize the real-world object, and is not a displayable 3D reconstruction of that object.

func createReferenceObject(transform: simd_float4x4, 
                              center: simd_float3, 
                              extent: simd_float3, 
                   completionHandler: (ARReferenceObject?, Error?) -> Void)
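For context, a minimal sketch of calling this ARSession API and exporting the result as an .arobject file (the `sceneView`, bounding-box values, and file name here are assumptions for illustration):

```swift
// Assumes `sceneView` is an ARSCNView whose session is running
// an ARObjectScanningConfiguration.
sceneView.session.createReferenceObject(
    transform: matrix_identity_float4x4,       // origin of the scanned region
    center: simd_float3(0, 0, 0),              // center relative to transform
    extent: simd_float3(0.5, 0.5, 0.5))        // bounding-box size in meters
{ referenceObject, error in
    guard let object = referenceObject else {
        print("Scanning failed: \(error?.localizedDescription ?? "unknown error")")
        return
    }
    do {
        // Save the sparse feature-point data for later recognition.
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent("scan.arobject")
        try object.export(to: url, previewImage: nil)
        print("Saved reference object to \(url)")
    } catch {
        print("Export failed: \(error)")
    }
}
```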

My question:

Is there a method that allows us to reconstruct digital 3D geometry (low-poly or high-poly) from the .arobject file using Poisson Surface Reconstruction or Photogrammetry?

Andy Jazz

2 Answers

8

RealityKit 2.0 | Object Capture API

The Object Capture API, announced at WWDC 2021, provides you with the long-awaited tools for photogrammetry. The output is a USDZ model with a hi-res texture.

Read about photogrammetry HERE.
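A minimal sketch of the Object Capture workflow on macOS 12+, turning a folder of photos into a USDZ model (the input and output paths are placeholders):

```swift
import RealityKit

// Folder of overlapping photos of the object, and the desired output path.
let inputFolder = URL(fileURLWithPath: "/tmp/ObjectPhotos", isDirectory: true)
let outputFile = URL(fileURLWithPath: "/tmp/model.usdz")

do {
    // PhotogrammetrySession consumes the image folder...
    let session = try PhotogrammetrySession(input: inputFolder)
    // ...and produces a textured USDZ model at the requested detail level.
    try session.process(requests: [
        .modelFile(url: outputFile, detail: .medium)
    ])
    // Monitor session.outputs (an AsyncSequence) for progress
    // and the .processingComplete message.
} catch {
    print("Photogrammetry failed: \(error)")
}
```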

ARKit | Mesh Reconstruction

Using an iOS device with a LiDAR scanner and ARKit 3.5/4.0/5.0, you can easily reconstruct a topological map of the surrounding environment. The Scene Reconstruction feature starts working immediately after launching the current ARSession.

Apple's LiDAR works within a 5-meter range. The scanner helps you improve the quality of the ZDepth channel, as well as features like People/Real World Objects Occlusion, Motion Tracking, immediate physics contact bodies, and Raycasting.
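Enabling Scene Reconstruction is a few lines of configuration; a sketch (the `sceneView` parameter is an assumption for illustration):

```swift
import ARKit

// Enable LiDAR mesh reconstruction (ARKit 3.5+, LiDAR-equipped device only).
func startSceneReconstruction(on sceneView: ARSCNView) {
    guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) else {
        print("This device has no LiDAR scanner")
        return
    }
    let config = ARWorldTrackingConfiguration()
    config.sceneReconstruction = .mesh        // or .meshWithClassification
    config.frameSemantics = .sceneDepth       // per-frame ZDepth from the LiDAR
    sceneView.session.run(config)
}
```

Once the session runs, ARKit delivers the reconstructed surfaces as ARMeshAnchor objects in the session's anchors.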

Other awesome peculiarities of LiDAR scanner are:

  • you can use your device in a poorly lit room
  • you can track pure white walls with no features at all
  • you can detect planes almost instantaneously

Consider that the quality of an object scanned with LiDAR isn't as good as you might expect. Small details are not scanned, because the resolution of Apple's LiDAR isn't high enough.

Andy Jazz
  • Can you do that with the front-facing TrueDepth camera (no LiDAR, but has depth data)? Which APIs in ARKit 3.5 are you using to reconstruct a mesh? – Luther Aug 20 '20 at 21:41
  • Hi @Luther, technically you can definitely use iPhone's TrueDepth camera, but I've seen no robust examples showing scene reconstruction feature using TrueDepth sensor (seemingly because its working distance is approximately 0.1...1.0 m). For any additional info on LiDAR, please read my story on Medium (a link is in my answer). – Andy Jazz Sep 12 '20 at 19:55
  • Is there any working sample or code example to create a 3D mesh from scanning a real object using LiDAR and ARKit 4.0? – Vidhya Sri Apr 12 '21 at 05:47
  • @AndyFedoroff you said it's possible to export scanned object as .obj. Any examples of this? Or links? – grantespo May 19 '21 at 20:25
  • Hi @grantespo. https://stackoverflow.com/questions/61063571/arkit-3-5-how-to-export-obj-from-new-ipad-pro-with-lidar/61104855#61104855 – Andy Jazz May 19 '21 at 20:27
  • @AndyFedoroff I noticed your question (this post) uses the "Scanning Real World Object" sample app, but the answer you linked uses the "Visualizing Scene Semantics" sample app. Can the concepts in that answer still be applied to the "Scanning Real World Object" sample app? (e.g.: retrieving the ARMeshGeometry object from the first anchor in the frame.) – grantespo May 19 '21 at 21:03
  • @grantespo, Let's wait for WWDC 2021. I am sure that new features of the LiDAR API will be presented. – Andy Jazz May 19 '21 at 21:45
7

You answered your own question with a quote from Apple's documentation:

An ARReferenceObject contains only the spatial feature information needed for ARKit to recognize the real-world object, and is not a displayable 3D reconstruction of that object.

If you run that sample code, you can see for yourself the visualizations it creates of the reference object during scanning and after a test recognition — it's just a sparse 3D point cloud. There's certainly no photogrammetry in what Apple's API provides you, and there's not much to go on in terms of recovering realistic structure in a mesh.

That's not to say that such efforts are impossible — there have been some third parties demoing photogrammetry experiments built on top of ARKit. But:

1. that's not using ARKit 2 object scanning, just the raw pixel buffer and feature points from ARFrame.

2. the level of extrapolation in those demos would require non-trivial original R&D, as it's far beyond the kind of information ARKit itself supplies.
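For reference, the raw per-frame inputs such third-party pipelines work from are exposed by ARKit; a sketch of pulling them in a session delegate callback:

```swift
import ARKit

class ReconstructionFeeder: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Raw camera image for this frame.
        let pixelBuffer: CVPixelBuffer = frame.capturedImage
        // Sparse feature points ARKit detected (may be nil early in tracking).
        let points: [simd_float3] = frame.rawFeaturePoints?.points ?? []
        // Camera intrinsics, needed to reproject points into the image.
        let intrinsics = frame.camera.intrinsics
        // Feeding these into a reconstruction pipeline is left to your own R&D.
        _ = (pixelBuffer, points, intrinsics)
    }
}
```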

Abhishek Thapliyal
rickster