I am using an object segmentation dataset with the following information:
Introduced: IROS 2012
Device: Kinect v1
Description: 111 RGBD images of stacked and occluding objects on table.
Labelling: Per-pixel segmentation into objects.
Link to the dataset page: http://www.acin.tuwien.ac.at/?id=289
I am trying to use the depth map provided by the dataset. However, it seems the depth map is completely black.
Original image for the above depth map
I tried some preprocessing and normalised the image so that the depth map could be visualised as a grey image:
import cv2
import numpy as np

img_depth = cv2.imread("depth_map.png", -1)  # depth_map.png has uint16 data type; -1 loads it unchanged
depth_array = np.array(img_depth, dtype=np.float32)
frame = cv2.normalize(depth_array, depth_array, 0, 1, cv2.NORM_MINMAX)  # stretch depth values to [0, 1]
cv2.imwrite('capture_depth.png', (frame * 255).astype(np.uint8))
The result of doing this preprocessing is:
In one of the posts on Stack Overflow, I read that these black patches are regions where the depth map is not defined.
If I have to use this depth map, what is the best way to fill these undefined regions? (I am thinking of filling them with a nearest-neighbour approach, as sketched below, but feel there could be better ways.)
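For reference, this is roughly what I have in mind for the nearest-neighbour fill. It is only a rough sketch, assuming that a value of 0 marks undefined pixels in this dataset and using scipy.ndimage.distance_transform_edt to copy the closest valid depth into each hole; depth_array is from my code above:

import numpy as np
from scipy import ndimage

def fill_depth_nearest(depth):
    # assumption: 0 marks pixels where the sensor returned no depth
    invalid = (depth == 0)
    # for every pixel, get the index of the nearest valid (non-zero) pixel
    idx = ndimage.distance_transform_edt(invalid, return_distances=False,
                                         return_indices=True)
    # copy the nearest valid depth value into the undefined pixels
    return depth[tuple(idx)]

filled = fill_depth_nearest(depth_array)

I have also seen cv2.inpaint suggested for this kind of hole filling, but as far as I know it expects an 8-bit image, so the depth would have to be scaled first.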
Are there any RGB-D datasets that do not have such problems, or do these kinds of problems always exist? What is the best way to tackle them?
Thanks in Advance!