
I am building a game in Unity with two Azure Kinects. How do I calibrate them so that I can get the positional data of a body and solve occlusion?

Currently I get two bodies for each person. How can I map the two virtual bodies (one from each camera) to each individual person?

Rockyzt 21

1 Answer


Your idea is a good one: multiple-camera setups are a way to increase the coverage of the captured human body and to minimize occlusions.

Please go through the document Benefits of using multiple Azure Kinect DK devices, in particular the section on filling in occlusions. Although the Azure Kinect DK data transformations produce a single image, the two cameras (depth and RGB) are actually a small distance apart, and this offset makes occlusions possible. Use the Azure Kinect Sensor SDK to capture the depth data from both devices and store it in separate matrices, then align the two matrices with a 3D registration algorithm. This lets you map the data from one device to the other, taking the relative position and orientation of each device into account.
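To make that last step concrete, here is a minimal sketch in C (the same calls are exposed by the Microsoft.Azure.Kinect.Sensor C# wrapper that Unity projects typically use). It opens both devices, grabs a depth image from each, and maps a 3D point from device B's depth-camera frame into device A's frame with a 4×4 rigid transform. The `g_B_to_A` matrix is a placeholder: the Sensor SDK does not compute cross-device extrinsics for you, so you would estimate it once yourself with a registration step (for example a checkerboard or an ICP-style point-cloud alignment), which is the "3D registration algorithm" mentioned above.

```c
// Sketch: capture a depth image from each of two Azure Kinect devices and
// map a 3D point from device B's depth-camera frame into device A's frame
// using a rigid transform obtained from an offline registration step.
#include <k4a/k4a.h>
#include <stdio.h>

// Hypothetical 4x4 row-major extrinsic (rotation + translation in mm) from
// device B's depth camera to device A's. The identity below is only a
// placeholder -- replace it with the result of your own registration
// (checkerboard, ICP on the point clouds, or skeleton-based alignment).
static const float g_B_to_A[16] = {
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1,
};

static void transform_point_B_to_A(const float p_B[3], float p_A[3])
{
    for (int r = 0; r < 3; ++r)
    {
        p_A[r] = g_B_to_A[4 * r + 0] * p_B[0] +
                 g_B_to_A[4 * r + 1] * p_B[1] +
                 g_B_to_A[4 * r + 2] * p_B[2] +
                 g_B_to_A[4 * r + 3];
    }
}

int main(void)
{
    k4a_device_t dev_a = NULL, dev_b = NULL;
    if (k4a_device_open(0, &dev_a) != K4A_RESULT_SUCCEEDED ||
        k4a_device_open(1, &dev_b) != K4A_RESULT_SUCCEEDED)
    {
        printf("failed to open both devices\n");
        return 1;
    }

    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;
    k4a_device_start_cameras(dev_a, &config);
    k4a_device_start_cameras(dev_b, &config);

    // Grab one capture per device. In a real app you would hardware-sync the
    // devices (see the sync sketch further down) and match captures by their
    // device timestamps instead of taking whatever arrives first.
    k4a_capture_t cap_a = NULL, cap_b = NULL;
    if (k4a_device_get_capture(dev_a, &cap_a, 1000) == K4A_WAIT_RESULT_SUCCEEDED &&
        k4a_device_get_capture(dev_b, &cap_b, 1000) == K4A_WAIT_RESULT_SUCCEEDED)
    {
        k4a_image_t depth_a = k4a_capture_get_depth_image(cap_a);
        k4a_image_t depth_b = k4a_capture_get_depth_image(cap_b);
        printf("depth A %dx%d, depth B %dx%d\n",
               k4a_image_get_width_pixels(depth_a), k4a_image_get_height_pixels(depth_a),
               k4a_image_get_width_pixels(depth_b), k4a_image_get_height_pixels(depth_b));

        // Example: bring a 3D point (e.g. a body joint) seen by device B
        // into device A's coordinate frame.
        float joint_in_B[3] = { 100.0f, 250.0f, 1800.0f };  // millimeters
        float joint_in_A[3];
        transform_point_B_to_A(joint_in_B, joint_in_A);
        printf("point in A's frame: %.1f %.1f %.1f\n",
               joint_in_A[0], joint_in_A[1], joint_in_A[2]);

        k4a_image_release(depth_a);
        k4a_image_release(depth_b);
        k4a_capture_release(cap_a);
        k4a_capture_release(cap_b);
    }

    k4a_device_stop_cameras(dev_a);
    k4a_device_stop_cameras(dev_b);
    k4a_device_close(dev_a);
    k4a_device_close(dev_b);
    return 0;
}
```

Once every point (or body joint) from device B is expressed in device A's frame, the two depth maps cover each other's occluded regions.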

Multiple Azure Kinect DK Cameras

Please also refer to this article by Nadav Eichler:

Spatio-Temporal Calibration of Multiple Kinect Cameras Using 3D Human Pose

Quoted:

When using multiple cameras, two main requirements must be fulfilled in order to fuse the data across cameras:

  1. Camera Synchronization (alignment between the cameras’ clocks).
  2. Multi-Camera Calibration (calculating the mapping between cameras’ coordinate systems).

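For the first requirement, the devices' clocks can be aligned in hardware by daisy-chaining them with the 3.5 mm sync cable, one master and one or more subordinates. Below is a minimal C sketch of that configuration; the 160 µs subordinate delay follows the guidance in the Microsoft synchronization docs to keep the two depth cameras' laser pulses from interfering, and the same settings (WiredSyncMode, etc.) are exposed in the Microsoft.Azure.Kinect.Sensor C# wrapper for Unity.

```c
// Sketch: start two Azure Kinect devices with hardware (wired) sync so their
// captures can be matched by device timestamp. Assumes the devices are
// daisy-chained with a 3.5 mm sync cable: device 0 as master, device 1 as
// subordinate.
#include <k4a/k4a.h>

static k4a_device_configuration_t make_config(k4a_wired_sync_mode_t sync_mode,
                                              uint32_t subordinate_delay_usec)
{
    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    config.color_resolution = K4A_COLOR_RESOLUTION_720P;
    config.color_format = K4A_IMAGE_FORMAT_COLOR_BGRA32;
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;
    config.synchronized_images_only = true;
    config.wired_sync_mode = sync_mode;
    // Offset the subordinate's depth capture so the two depth cameras'
    // IR illuminators do not interfere with each other.
    config.subordinate_delay_off_master_usec = subordinate_delay_usec;
    return config;
}

int main(void)
{
    k4a_device_t master = NULL, subordinate = NULL;
    if (k4a_device_open(0, &master) != K4A_RESULT_SUCCEEDED ||
        k4a_device_open(1, &subordinate) != K4A_RESULT_SUCCEEDED)
    {
        return 1;
    }

    k4a_device_configuration_t master_cfg =
        make_config(K4A_WIRED_SYNC_MODE_MASTER, 0);       // master delay must be 0
    k4a_device_configuration_t sub_cfg =
        make_config(K4A_WIRED_SYNC_MODE_SUBORDINATE, 160); // >= 160 us offset

    // Start the subordinate first; it waits for the master's sync pulse.
    k4a_device_start_cameras(subordinate, &sub_cfg);
    k4a_device_start_cameras(master, &master_cfg);

    /* ... capture loop: match captures across devices by device timestamp ... */

    k4a_device_stop_cameras(master);
    k4a_device_stop_cameras(subordinate);
    k4a_device_close(master);
    k4a_device_close(subordinate);
    return 0;
}
```

The second requirement, the mapping between the cameras' coordinate systems, is the extrinsic transform discussed above (and in the registration sketch earlier); the linked paper estimates it directly from the tracked 3D human poses.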

SatishBoddu
  • How do you synchronise the cameras to do this? Are there any docs for doing this in C#, since we need to use it in Unity? – Rockyzt 21 Feb 02 '23 at 17:29
  • Regarding the issue with getting two bodies for each person, you can use the body tracking API to track the body joints and map them to the corresponding person. You can use the body index map to identify the body joints of each person. Once you have identified the body joints, you can use the k4a_calibration_2d_to_2d() function to map the 2D coordinates of the body joints from each camera to a common coordinate system. – SatishBoddu Feb 07 '23 at 22:54
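To illustrate the body-matching part of the previous comment with a rough sketch: once each camera's skeletons are expressed in a common coordinate frame (for example by applying the inter-camera extrinsic from the registration sketch earlier), the two detections of the same person can be paired by nearest joint position. The snippet below uses the Azure Kinect Body Tracking C SDK (k4abt) and greedily matches bodies across two body frames by pelvis distance; the 300 mm threshold is an arbitrary illustration value, and a real system would likely compare more joints and add temporal smoothing.

```c
// Sketch: pair bodies detected by two trackers by nearest pelvis position,
// assuming both skeletons are already expressed in a common coordinate frame.
#include <k4abt.h>
#include <math.h>
#include <stdio.h>

static float joint_distance_mm(const k4a_float3_t *a, const k4a_float3_t *b)
{
    float dx = a->xyz.x - b->xyz.x;
    float dy = a->xyz.y - b->xyz.y;
    float dz = a->xyz.z - b->xyz.z;
    return sqrtf(dx * dx + dy * dy + dz * dz);
}

// For each body in camera A's frame, find the closest body in camera B's
// frame (greedy nearest-neighbour association).
static void match_bodies(k4abt_frame_t frame_a, k4abt_frame_t frame_b)
{
    uint32_t num_a = k4abt_frame_get_num_bodies(frame_a);
    uint32_t num_b = k4abt_frame_get_num_bodies(frame_b);

    for (uint32_t i = 0; i < num_a; ++i)
    {
        k4abt_skeleton_t skel_a;
        k4abt_frame_get_body_skeleton(frame_a, i, &skel_a);
        // NOTE: these joints are assumed to already be in the common frame;
        // otherwise apply your inter-camera extrinsic to one side first.
        k4a_float3_t pelvis_a = skel_a.joints[K4ABT_JOINT_PELVIS].position;

        int best_j = -1;
        float best_dist = 300.0f;  // mm; arbitrary association threshold
        for (uint32_t j = 0; j < num_b; ++j)
        {
            k4abt_skeleton_t skel_b;
            k4abt_frame_get_body_skeleton(frame_b, j, &skel_b);
            k4a_float3_t pelvis_b = skel_b.joints[K4ABT_JOINT_PELVIS].position;
            float d = joint_distance_mm(&pelvis_a, &pelvis_b);
            if (d < best_dist)
            {
                best_dist = d;
                best_j = (int)j;
            }
        }

        if (best_j >= 0)
        {
            printf("camera A body %u <-> camera B body %d (%.0f mm apart)\n",
                   k4abt_frame_get_body_id(frame_a, i), best_j, best_dist);
        }
    }
}
```

After matching, joints that are occluded (or low confidence) in one camera can be filled in from the paired body seen by the other camera, which is the occlusion handling asked about in the question.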