I'm making a program that needs to detect a constellation of stars in a picture of the night sky, but the algorithm I came up with isn't good enough.
For each constellation, I arbitrarily chose a reference side (a pair of stars) and stored every other star of the constellation in polar coordinates relative to that side. Then I used a scoring function (roughly the mean squared distance) to find the best-matching star in the picture for each star of the constellation, and picked the constellation with the best overall score.
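To make the question concrete, here is a minimal sketch of what I described above. My actual code differs, and the language, the helper names (`polar_relative`, `score`), and the star representation as `(x, y)` tuples are all just for illustration:

```python
import math

def polar_relative(stars, ref_a, ref_b):
    """Express each star in polar coordinates relative to the reference
    side ref_a -> ref_b: radius scaled by the side length (scale-invariant),
    angle measured from the side's direction (rotation-invariant)."""
    dx, dy = ref_b[0] - ref_a[0], ref_b[1] - ref_a[1]
    base_len = math.hypot(dx, dy)
    base_ang = math.atan2(dy, dx)
    coords = []
    for (x, y) in stars:
        r = math.hypot(x - ref_a[0], y - ref_a[1]) / base_len
        t = math.atan2(y - ref_a[1], x - ref_a[0]) - base_ang
        coords.append((r, t))
    return coords

def score(template, candidates):
    """Mean squared distance from each template star to its nearest
    candidate star; lower is better, 0 means a perfect match."""
    cand_xy = [(r * math.cos(t), r * math.sin(t)) for (r, t) in candidates]
    total = 0.0
    for (r, t) in template:
        px, py = r * math.cos(t), r * math.sin(t)
        total += min((px - cx) ** 2 + (py - cy) ** 2 for (cx, cy) in cand_xy)
    return total / len(template)
```

Note that `score` matches each template star to its nearest candidate independently, so two template stars can "claim" the same picture star; I suspect that's part of why false positives slip through.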
The problem is that the success rate isn't good enough. When a picture contains too many stars, the program sometimes picks a different set of stars that happens to form a similar shape, so it reports a constellation that looks right but is made of the wrong stars.
I want to use a better algorithm to prevent this, without using machine learning. Are there any? A better scoring function could also help. TIA :)