Thanks for reading this question!
We have a setup as seen in this visualisation. There are 4 cameras, each calibrated with intrinsic parameters (camera matrix, distortion coefficients) and extrinsic parameters (rotation, translation, essential and fundamental matrices) available.
The top-left corner of the white plane is the origin of the world coordinate system, with positive x going right, positive y going down, and positive z going towards the object. One of the red cameras (the 2nd from the white plane) is the master and the other 3 are slaves (for calibration purposes).
We want to use the cameras to locate objects in the scene and express their positions in world coordinates (i.e. with reference to the white plane). So far we are able to locate objects and get their 3D positions in the master camera's coordinate system. The question is: what is the most efficient/easiest way to transform from camera coordinates to world coordinates? Would finding the normal of the projected white plane help (for example, as in How to project a point onto a plane in 3D?)? Or, if the 3D positions of some points with respect to the white plane are known, can we use an OpenCV API like cv2.estimateAffine3D?
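For context, here is a minimal numpy sketch of what I understand the transform to be, assuming the master camera's extrinsics R, t map world coordinates to camera coordinates (the convention `cv2.solvePnP` returns when run on known points of the white plane). The values of R and t below are made up for illustration; inverting the rigid transform gives the camera-to-world mapping:

```python
import numpy as np

# Hypothetical extrinsics of the master camera w.r.t. the white plane,
# i.e. X_cam = R @ X_world + t (the convention used by cv2.solvePnP).
# Here R is a rotation about z by 30 degrees and t an arbitrary offset;
# in practice these would come from solvePnP on the plane's corners.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.1, -0.2, 1.5])

def cam_to_world(X_cam, R, t):
    """Invert the rigid transform: X_world = R^T @ (X_cam - t)."""
    return R.T @ (X_cam - t)

# Round-trip check: a world point mapped into camera coordinates
# and back should be recovered exactly.
X_world = np.array([0.3, 0.4, 0.0])   # a point on the white plane (z = 0)
X_cam = R @ X_world + t
print(cam_to_world(X_cam, R, t))      # ≈ [0.3, 0.4, 0.0]
```

Is this the right approach, or is there a simpler/more robust way (e.g. fitting the transform from correspondences with cv2.estimateAffine3D)?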
Thanks!