I'm new to EmguCV, OpenCV, and machine vision in general. I translated the code from this Stack Overflow question from C++ to C#, and I also copied the sample image from that question so I could check whether my code is working as expected.
    // Load the image exactly as stored (any color format / bit depth).
    Mat map = CvInvoke.Imread("C:/Users/Cindy/Desktop/coffee_mug.png",
        Emgu.CV.CvEnum.LoadImageType.AnyColor | Emgu.CV.CvEnum.LoadImageType.AnyDepth);
    CvInvoke.Imshow("window", map);

    // MinMaxIdx needs a single-channel array, so convert to grayscale first.
    Image<Gray, Byte> imageGray = map.ToImage<Gray, Byte>();
    double min = 0, max = 0;
    int[] minIndex = new int[2], maxIndex = new int[2]; // one entry per dimension of a 2-D image
    CvInvoke.MinMaxIdx(imageGray, out min, out max, minIndex, maxIndex, null);

    // Stretch the intensity range to the full 0-255 scale.
    imageGray -= min;
    Mat adjMap = new Mat();
    CvInvoke.ConvertScaleAbs(imageGray, adjMap, 255 / (max - min), 0);
    CvInvoke.Imshow("Out", adjMap);
    CvInvoke.WaitKey(0); // keep the windows open until a key is pressed
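For what it's worth, my understanding is that the last few lines only perform a min-max contrast stretch per pixel. A plain-Python sketch of that arithmetic (illustrative only, no EmguCV; the real code operates on whole image matrices):

```python
# Min-max contrast stretch: subtract the minimum, then rescale so the
# maximum lands on 255 -- what MinMaxIdx + ConvertScaleAbs do above.

def stretch(pixels):
    lo, hi = min(pixels), max(pixels)   # MinMaxIdx
    scale = 255 / (hi - lo)             # 255 / (max - min)
    # subtract min, scale, and round to byte range (ConvertScaleAbs)
    return [round((p - lo) * scale) for p in pixels]

print(stretch([50, 100, 150, 200]))  # -> [0, 85, 170, 255]
```

So the output image keeps the same relative brightness ordering as the input; the operation only widens the range of values.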
Original Image:
After Processing:
The result doesn't look like a depth map to me; it just looks like a slightly modified grayscale image, so I'm curious where I went wrong in my code. Also, unlike the C++ code I linked above, MinMaxIdx() doesn't work for me unless I convert the image to grayscale first. Ultimately, what I'd like to do is generate relative depth maps from a single webcam.