
I am working on a task where I use the Canny edge detector to compute an edge image in which white pixels mark the edges, and I then need the coordinates of these edge pixels so I can pass them into another function.

Getting the coordinates of edge pixels from the edge image is usually done with OpenCV's cv::findContours(), but the algorithm inside that function is complicated and full of branching decisions, so it is not differentiable. I now want to use this step of turning an edge image into 2D coordinates as part of a deep learning model, so I need a process that is differentiable and more straightforward.
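
For concreteness, here is a minimal sketch of the pipeline I mean (assuming OpenCV 4.x Python bindings; the file name and Canny thresholds are just placeholders):

```python
import cv2
import numpy as np

# Load a grayscale image (placeholder path) and compute a Canny edge map.
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 100, 200)  # uint8 edge map: 0 (background) or 255 (edge)

# Usual route: cv::findContours traces connected contours, but its
# pixel-following logic is full of branches and is not differentiable.
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

# Simpler route: just collect the (row, col) coordinates of all white pixels.
# This is more straightforward, but the hard threshold/indexing step still
# blocks gradients with respect to the image values.
coords = np.argwhere(edges > 0)
```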

I couldn't find one. Does anyone have any ideas? Thanks!

AllyCasa
  • The question is unclear to me. If you just want the edge pixels out of a Canny image you could just take all pixels that are white (or binary 1, I believe); a minimal sketch of that appears after these comments. If you want connected edges you will need to use an algorithm that follows a line, deals with gaps, etc., so I think there is no straightforward process for this. OpenCV uses [this](https://stackoverflow.com/questions/10427474/what-is-the-algorithm-that-opencv-uses-for-finding-contours) implementation. – Grillteller Nov 24 '21 at 09:13
  • @Grillteller The point is to have a differentiable computation. And yeah, I also think there's probably no straightforward process. – AllyCasa Nov 26 '21 at 05:45
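
A minimal PyTorch sketch of the plain "take every white pixel" extraction suggested in the first comment (the edge map tensor here is just a placeholder); note that the hard indexing step is itself not differentiable with respect to the edge map values, which is exactly the obstacle the question describes:

```python
import torch

# edge_map: (H, W) tensor produced somewhere upstream in the model,
# e.g. a thresholded edge response with values 0 or 1 (placeholder data).
edge_map = (torch.rand(128, 128) > 0.95).float()

# torch.nonzero returns the (row, col) indices of all non-zero pixels.
# The coordinates are integer indices, so gradients cannot flow from them
# back into the values of edge_map.
coords = torch.nonzero(edge_map, as_tuple=False)  # shape (N, 2)
```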

0 Answers