
I have two webcams (call them the left and right webcams) that I am using as a stereo camera system. I have the following information regarding my setup:

  1. Camera intrinsics and distortion coefficients of both webcams;
  2. Rotation and translation vectors of one webcam with respect to the other;
  3. Fundamental and essential matrices;
  4. Rectification (reprojection) matrices for both webcams, so that the images from the webcams can be warped such that the epipolar lines are horizontal;
  5. Disparity map (computed with StereoSGBM).

Now, let's say that using Harris corner detection (or some other feature-detection technique), I have acquired a list of feature points from the left webcam image. How do I get the 3-D world coordinates of these points? Also, how would the procedure change if the feature points came from the right webcam image instead?

I am using OpenCV 3.4.10 with C++ on Ubuntu 18.04.

  • from disparity you can compute the depth (look for the formula, something like focalLength*baseline/disparity). With the depth you can project the points to 3D: https://stackoverflow.com/questions/31265245/extracting-3d-coordinates-given-2d-image-points-depth-map-and-camera-calibratio/31266627#31266627 – Micka Jul 19 '20 at 16:31
  • @Micka thanks for the link. But how do I map a point in the left (or right) webcam image to the corresponding point in the disparity map? The image point (8, 11), for example, will not correspond to the point (8, 11) in the disparity map. How do I get this correspondence? – Harshit Kaushik Jul 20 '20 at 08:07
  • it should correspond to one of the rectified images and by using the disparity it will correspond to the rectified other image. That's what the disparity is. Can you show some images? – Micka Jul 20 '20 at 08:50
  • not sure about how it works with fundamental and essential matrices. If you don't have rectified images there, there might be some formula with epipolar lines, or a way to use the fundamental matrix to warp from one image to the other. Still, I think the disparity map should correspond to one of the two images?!? – Micka Jul 20 '20 at 08:53
  • have a look at this one: https://stackoverflow.com/questions/36172913/opencv-depth-map-from-uncalibrated-stereo-system maybe it will help – Micka Jul 20 '20 at 08:54
  • @Micka do you mean to say that one of the rectified images (either from left webcam or from right webcam) looks like the disparity map (considering just the boundaries and not the color or the datatype)? This does not make sense to me. – Harshit Kaushik Jul 20 '20 at 14:15
  • Also, @Micka, the links you have provided do teach me something new (thank you for that), but they still fail to answer my question. – Harshit Kaushik Jul 20 '20 at 14:16
  • disparity map should be aligned for one of both images. The disparity tells you how far from that pixel you have to go to reach the other image's pixel (that's the definition of disparity). In rectified images the disparity is an x-axis pixel movement. Once you know how to calculate the depth from disparity, your question is answered, isn't it? Maybe just add some images (inputs & disparity) please, so we can find out whether we are talking about the same things. – Micka Jul 20 '20 at 14:52
  • @Micka [here](https://imgur.com/a/098qHEy) are the images that might help. Also, I know it's a very untidy disparity map. Also, I noticed that some columns on the left of the disparity map are completely blank. So, it means that I cannot directly correspond a point on the left image to the same row, column point in the disparity map. – Harshit Kaushik Jul 20 '20 at 15:30
  • ok, so your question is about how to get 3D points for 2D pixel positions where you don't have disparity values, but you do have pixel correspondences in both images (like you know that feature A in one image is feature B in the other image)? Then the answer is: shoot a ray through each of the pixels and test where both rays come closest to each other (in theory they would even intersect somewhere, but only in theory). – Micka Jul 20 '20 at 16:34
  • No, you are getting me wrong. My question is, how do I get the 3D world coordinates of a specific point using the disparity map? The solution is simple if I can get the depth from the disparity map, but the problem is where to look for the point in the disparity map. It is clear from the images that one cannot directly jump to the same row,col position in the disparity map as that of the image point. – Harshit Kaushik Jul 20 '20 at 17:17
  • how do you tell that the disparity map isn't aligned to one of the images? For me it looks like it is aligned to rectified left image. You can see from the left side of the cup and the corner of the furniture in the background and from the left side of the mouse. But the quality of the disparity map looks awful... very noisy? – Micka Jul 21 '20 at 09:51
  • @Micka the image size for the left webcam is the same as that of the disparity map (640 × 480), and a large left portion of the disparity map is completely black. So, if the large black portion (which seems to be of no use to us) of the disparity map is removed, we are left with an image whose size is less than the left webcam image. This is why I said that the disparity map is not completely aligned with any of the images, and that this misalignment is the reason why I cannot simply jump to the same row,col position in the disparity map as that of the image point. – Harshit Kaushik Jul 21 '20 at 09:59
  • just keep the black part, where disparity could not be computed. – Micka Jul 21 '20 at 10:37
