I am using the Hough transform from scikit-image in Python to find the center of a crosshair.
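For context, here is a minimal sketch of my approach (function and variable names are mine; the input is assumed to already be a binary edge image of the crosshair):

```python
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

def crosshair_center(edge_image):
    """Detect the two strongest lines and return their intersection (x, y)."""
    hspace, angles, dists = hough_line(edge_image)
    _, peak_angles, peak_dists = hough_line_peaks(hspace, angles, dists, num_peaks=2)

    # Each detected line satisfies x*cos(theta) + y*sin(theta) = rho,
    # so the crosshair center is the solution of a 2x2 linear system.
    A = np.array([[np.cos(peak_angles[0]), np.sin(peak_angles[0])],
                  [np.cos(peak_angles[1]), np.sin(peak_angles[1])]])
    b = np.array([peak_dists[0], peak_dists[1]])
    return np.linalg.solve(A, b)
```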
Overall this produces great results, but I need to quantify exactly how well the center of the crosshair has been found. I can handle the error propagation from combining multiple lines. What I have not found is a way to measure the confidence of the fit itself (e.g. something like a covariance).
This article suggests that there may be ways to quantify this, but I have not read it in its entirety. It also suggests smoothing and interpolating the Hough space to better locate the maximum of the peak.
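I imagine that would look something like the sketch below (my own attempt, assuming the peak is away from the accumulator border; `hspace` is the accumulator returned by `hough_line`, and the smoothing width is arbitrary):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refined_peak(hspace, sigma=1.0):
    """Smooth the accumulator, then refine the strongest peak to sub-bin
    accuracy with a parabolic fit along each axis."""
    smoothed = gaussian_filter(hspace.astype(float), sigma)
    i, j = np.unravel_index(np.argmax(smoothed), smoothed.shape)

    def parabolic_offset(fm1, f0, fp1):
        # Vertex of the parabola through three neighbouring samples.
        denom = fm1 - 2 * f0 + fp1
        return 0.0 if denom == 0 else 0.5 * (fm1 - fp1) / denom

    di = parabolic_offset(smoothed[i - 1, j], smoothed[i, j], smoothed[i + 1, j])
    dj = parabolic_offset(smoothed[i, j - 1], smoothed[i, j], smoothed[i, j + 1])
    return i + di, j + dj  # fractional (rho index, theta index)
```

But this still only gives a better peak location, not an uncertainty.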
One could also consider fitting a curve (e.g. a 2D Gaussian or Lorentzian) to the Hough space of the image, because even a bad covariance is a covariance that can be reported in a scientific document. If this is the correct approach, it is still unclear which function one should use to fit the data in Hough space. A perfect line becomes a sinusoid there, but the accumulation of these sinusoids is what gives rise to the peak.
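A sketch of what I have in mind, using `scipy.optimize.curve_fit` on a window around the strongest peak (the window size, initial guesses, and the choice of a Gaussian model are all arbitrary assumptions on my part):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian2d(coords, amp, r0, t0, sr, st, offset):
    """Separable 2D Gaussian over (rho index, theta index) coordinates."""
    r, t = coords
    return (amp * np.exp(-((r - r0) ** 2 / (2 * sr ** 2)
                           + (t - t0) ** 2 / (2 * st ** 2))) + offset).ravel()

def fit_peak(hspace, window=10):
    """Fit a 2D Gaussian to a window around the strongest accumulator peak.
    Returns the fitted parameters and their covariance matrix."""
    i, j = np.unravel_index(np.argmax(hspace), hspace.shape)
    sl = (slice(max(i - window, 0), i + window + 1),
          slice(max(j - window, 0), j + window + 1))
    patch = hspace[sl].astype(float)
    r, t = np.mgrid[sl[0], sl[1]]

    p0 = (patch.max() - patch.min(), i, j, 2.0, 2.0, patch.min())
    popt, pcov = curve_fit(gaussian2d, (r, t), patch.ravel(), p0=p0)
    # popt[1], popt[2] are the fitted peak (rho index, theta index);
    # pcov would be the uncertainty I could report.
    return popt, pcov
```

My worry is whether a Gaussian (or Lorentzian) is even the right model for the shape of the accumulated peak.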
Do you know how to quantify the accuracy of a Hough transform? If so, how? Would you recommend a different technique for this problem? (Hopefully not.)