Assume you have a square, red-coloured piece of paper with a side length of 5 centimetres.
- I can detect its size (as a bounding box) in pixels, accurately enough, in the image the camera takes.
- I know the physical size.
On the other hand,
- I do not know how to combine the pixel size from the camera image with the actual physical size of the paper in a formula. To do this, I would presumably need a constant that comes from the camera's specifications.
- I believe each iOS device model has a different calibration (and probably even individual units of the same model differ slightly from each other?).
I think that if I can somehow get that information from the camera, I can map pixel measurements to physical ones and use the ratio to find the distance to a specific object, as sketched below.
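If I understand the geometry correctly, this is the pinhole camera model, and the constant in question is the focal length expressed in pixels. A minimal sketch of the relation in Swift (the function and parameter names are my own, just for illustration):

```swift
/// Pinhole-model distance estimate: distance = fx * realWidth / pixelWidth.
/// - fx: focal length in pixels (the per-device constant discussed above)
/// - realWidth: physical side length in metres (0.05 for the 5 cm paper)
/// - pixelWidth: detected bounding-box width in pixels
func estimateDistance(fx: Float, realWidth: Float, pixelWidth: Float) -> Float {
    return fx * realWidth / pixelWidth
}
```

This assumes the paper faces the camera roughly head-on; if it is tilted, the bounding box shrinks and the estimate drifts.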
- Do you see any problems in the above ideas?
- What would be the best way to get that constant from the camera? The iOS camera APIs? The published specifications of the lens? Individually measuring the size of the box in the image at the same camera distance on different iOS device models and saving those values per model?
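For the API route, as far as I can tell AVFoundation can attach the camera intrinsic matrix to every frame on iOS 11+, which would make per-model measurement unnecessary. A rough, untested sketch of what I mean (error handling and capability checks kept minimal):

```swift
import AVFoundation
import simd

final class IntrinsicsGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()

    func start() throws {
        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .back) else { return }
        session.addInput(try AVCaptureDeviceInput(device: device))

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera"))
        session.addOutput(output)

        // Ask AVFoundation to attach the intrinsic matrix to each sample buffer.
        if let connection = output.connection(with: .video),
           connection.isCameraIntrinsicMatrixDeliverySupported {
            connection.isCameraIntrinsicMatrixDeliveryEnabled = true
        }
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let data = CMGetAttachment(sampleBuffer,
                                         key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                         attachmentModeOut: nil) as? Data else { return }
        let K = data.withUnsafeBytes { $0.load(as: matrix_float3x3.self) }
        let fx = K.columns.0.x   // focal length in pixels for this frame's resolution
        print("focal length in pixels:", fx)
    }
}
```

The matrix is expressed for the frame's own resolution, so fx should be paired with pixel measurements taken on that same frame. (If an AR session is already running, ARKit seems to expose the equivalent via ARCamera.intrinsics.)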
If none of this makes sense, what would you recommend I look into? I appreciate you taking the time to read this question and comment on it.