I am following the guide at https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/streaming_depth_data_from_the_truedepth_camera, which uses AVCaptureDepthDataOutput to stream depth data from the front-facing TrueDepth camera.
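For reference, here is roughly how my capture setup looks, condensed from Apple's sample. The class and queue names are mine, not from the tutorial:

```swift
import AVFoundation

enum DepthError: Error { case cameraUnavailable, cannotAddInputOrOutput }

/// Minimal sketch of my setup, condensed from the Apple sample.
final class DepthStreamer: NSObject, AVCaptureDepthDataOutputDelegate {
    let session = AVCaptureSession()
    private let depthOutput = AVCaptureDepthDataOutput()
    private let depthQueue = DispatchQueue(label: "depthQueue")

    func configure() throws {
        session.beginConfiguration()
        defer { session.commitConfiguration() }

        // Front-facing TrueDepth camera.
        guard let device = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                                   for: .video,
                                                   position: .front) else {
            throw DepthError.cameraUnavailable
        }
        let input = try AVCaptureDeviceInput(device: device)
        guard session.canAddInput(input) else { throw DepthError.cannotAddInputOrOutput }
        session.addInput(input)

        // Depth output that delivers AVDepthData to the delegate callback.
        guard session.canAddOutput(depthOutput) else { throw DepthError.cannotAddInputOrOutput }
        session.addOutput(depthOutput)
        depthOutput.isFilteringEnabled = true // temporal smoothing, as in the sample
        depthOutput.setDelegate(self, callbackQueue: depthQueue)
    }

    // Called once per depth frame.
    func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                         didOutput depthData: AVDepthData,
                         timestamp: CMTime,
                         connection: AVCaptureConnection) {
        // depthData.depthDataMap is the CVPixelBuffer discussed below.
    }
}
```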
However, two of the edges contain wrong data. In portrait mode it is the upper and right edges; in landscape rotated counter-clockwise, it is the left and upper edges. Here's an example of how it looks (this is from the tutorial, but I modified the code slightly): https://drive.google.com/file/d/1AGFHAZypmHz9136T02ufedz-X3Ct94Kq/view?usp=sharing. Note that I also printed the actual distances as numbers to make sure it wasn't a bug in the video/color rendering.
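This is roughly how I sample a raw distance to verify the values (the sampling coordinates are arbitrary, and `depthInMeters` is my own helper, not from the sample):

```swift
import AVFoundation

/// Read the Float32 depth value (in meters) at a given pixel.
func depthInMeters(from depthData: AVDepthData, atX x: Int, y: Int) -> Float {
    // Make sure we're looking at 32-bit float depth (meters), not disparity.
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let buffer = converted.depthDataMap

    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }

    let rowBytes = CVPixelBufferGetBytesPerRow(buffer)
    let base = CVPixelBufferGetBaseAddress(buffer)!
    let row = base.advanced(by: y * rowBytes)
    return row.assumingMemoryBound(to: Float32.self)[x] // NaN where there is no estimate
}
```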
I'm okay with this if it's just an unavoidable artifact of projecting 3D data onto a 2D buffer. What I'm looking for is the "correct" way to crop the image so that the 2D depth buffer doesn't have these artifacts at the edges. How do I know how far in to crop? Do I just experiment with it? If so, how would I know the same crop works across all devices?
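To make the question concrete, this is the kind of workaround I have in mind. The 10% margin is purely empirical, which is exactly the guesswork I'd like to replace with a principled value:

```swift
import AVFoundation
import CoreGraphics

/// Return the sub-rectangle of the depth map I currently treat as valid.
/// In sensor (landscape) orientation the artifacts show on the left and top
/// edges, so only those two sides are inset.
func validRegion(of depthMap: CVPixelBuffer, marginFraction: CGFloat = 0.1) -> CGRect {
    let width = CGFloat(CVPixelBufferGetWidth(depthMap))
    let height = CGFloat(CVPixelBufferGetHeight(depthMap))

    let dx = width * marginFraction   // inset from the left edge
    let dy = height * marginFraction  // inset from the top edge
    return CGRect(x: dx, y: dy, width: width - dx, height: height - dy)
}
```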
Another issue: if an object gets too close to the camera, its distance is reported as the maximum distance instead of the minimum. Is this expected?
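Right now I guard against it with a plausibility filter. The 0.2 m near limit is my guess at the sensor's minimum range, not a documented value:

```swift
/// Reject readings outside a plausible working range (NaN, wrapped-around
/// too-close values, or anything beyond the assumed far limit).
func plausibleDepth(_ meters: Float,
                    nearLimit: Float = 0.2,
                    farLimit: Float = 5.0) -> Float? {
    guard meters.isFinite, meters >= nearLimit, meters <= farLimit else {
        return nil
    }
    return meters
}
```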