I will be using the values produced by this function as ground-truth labels for a computer vision task, where I will train a model on simulation data and test it on real-world data labeled with ArUco.
I calibrated my smartphone camera with a chessboard and got a reprojection error of 0.95 (I wasn't able to get lower than this; I tried all the possible options such as ArUco and ChArUco boards, captured tens of images, and filtered out the bad ones, but nothing improved the error). I read somewhere that this is expected from smartphones; if not, please let me know.
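Roughly, the calibration looks like this (a minimal sketch; the board size, square size, and image paths are placeholders rather than my exact setup):

```python
import glob
import cv2
import numpy as np

# Placeholder settings: a 9x6 inner-corner chessboard with 25 mm squares
# and calibration images in ./calib/*.jpg
pattern_size = (9, 6)
square_size = 0.025  # metres

# 3D corner coordinates in the board's own frame
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size

obj_points, img_points = [], []
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)

for path in glob.glob("calib/*.jpg"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        continue
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    obj_points.append(objp)
    img_points.append(corners)

rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error (px):", rms)  # this is the ~0.95 figure mentioned above
```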
Now I placed my ArUco marker somewhere in the environment, captured images of it, and estimated its pose relative to the camera using estimatePoseSingleMarkers.
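A minimal sketch of that step, using the legacy cv2.aruco API (the dictionary, marker length, and the hard-coded camera_matrix/dist_coeffs are placeholders; in practice the intrinsics come from the calibration above):

```python
import cv2
import numpy as np

# Placeholder intrinsics -- in my case these come from cv2.calibrateCamera
camera_matrix = np.array([[1.0e3, 0, 640], [0, 1.0e3, 360], [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros((5, 1))

marker_length = 0.05  # marker side length in metres (placeholder)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
parameters = cv2.aruco.DetectorParameters_create()

frame = cv2.imread("marker.jpg")
corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary, parameters=parameters)

if ids is not None:
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_length, camera_matrix, dist_coeffs)
    for rvec, tvec in zip(rvecs, tvecs):
        # cv2.aruco.drawAxis on older OpenCV versions
        cv2.drawFrameAxes(frame, camera_matrix, dist_coeffs, rvec, tvec, 0.03)
    print("tvec (x, y, z) in metres:", tvecs[0].ravel())
```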
Upon drawing the axes, everything looks perfect and accurate. However, when I compare the pose values with the ones generated by simulating the same environment with the same object and camera, the values are actually quite different, especially the z value.
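For clarity, this is roughly how I compare the two poses (a minimal sketch; "sim_pose.npy" is a hypothetical 4x4 pose exported from my simulator, and I assume both poses are expressed as camera-to-marker transforms in the same convention and units):

```python
import cv2
import numpy as np

def to_matrix(rvec, tvec):
    """Build a 4x4 camera-to-marker transform from an OpenCV rvec/tvec pair."""
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(np.asarray(rvec, dtype=np.float64))
    T[:3, 3] = np.asarray(tvec, dtype=np.float64).ravel()
    return T

# rvecs/tvecs come from estimatePoseSingleMarkers above
T_aruco = to_matrix(rvecs[0], tvecs[0])
T_sim = np.load("sim_pose.npy")

print("translation difference (m):", T_aruco[:3, 3] - T_sim[:3, 3])
print("z difference (m):", T_aruco[2, 3] - T_sim[2, 3])
```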
I'm 100% sure that my simulation environment has no error, so I presume this gap is caused by ArUco.
Do you suggest any solution? How can I predict the error of ArUco's pose estimation? Is there any other possible way to collect ground-truth labels?