Take a look at the image below. Unless you're color blind, you should see some A's and B's. There are 3 A's and 3 B's in the image, and they all follow the same pattern: each letter's color is the background color plus a 10% shift in value, saturation, or hue, in that order. For most people the center letters (the saturation pair) are very hard to see - saturation doesn't do much, it seems!
This is troublesome for me, because I'm writing character recognition software that filters the image based on known foreground and background colors. Sometimes those colors are quite close together, and the image is noisy on top of that. To decide whether a pixel belongs to a letter or to the background, my program compares squared Euclidean RGB distances:
(r-fr)*(r-fr) + (g-fg)*(g-fg) + (b-fb)*(b-fb) <
(r-br)*(r-br) + (g-bg)*(g-bg) + (b-bb)*(b-bb)
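In code, that check is roughly the following (a minimal Python sketch; pixel, fg, and bg stand for RGB triples, names of my choosing):

    def sq_dist(c1, c2):
        """Squared Euclidean distance between two RGB triples."""
        return sum((a - b) ** 2 for a, b in zip(c1, c2))

    def is_foreground(pixel, fg, bg):
        """True if the pixel is closer to the known foreground
        color than to the known background color."""
        return sq_dist(pixel, fg) < sq_dist(pixel, bg)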
This works okay, but when the foreground and background colors are close, it often misclassifies pixels.
Are there better metrics I could use? I've looked into color perception models, but those mostly model brightness rather than the perceptual difference I'm after. Ideally I'd want one that treats saturation differences as less significant, and that weights certain hue differences differently too. Any pointers to interesting metrics would be very useful.
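To make the question concrete: I imagine replacing the RGB distance with something like CIELAB Delta E. Below is a rough sketch of that idea (my own helper names; it assumes sRGB input with components in 0..255 and a D65 white point, and uses the simple 1976 Delta E, which still weights all Lab axes equally, so it probably doesn't capture the saturation effect above):

    def srgb_to_linear(c):
        # c in [0, 1]; invert the sRGB gamma curve
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def rgb_to_lab(rgb):
        """(r, g, b) with components in 0..255 -> CIELAB (L, a, b)."""
        r, g, b = (srgb_to_linear(v / 255.0) for v in rgb)
        # linear RGB -> XYZ (sRGB primaries, D65 illuminant)
        x = 0.4124 * r + 0.3576 * g + 0.1805 * b
        y = 0.2126 * r + 0.7152 * g + 0.0722 * b
        z = 0.0193 * r + 0.1192 * g + 0.9505 * b
        # XYZ -> Lab, relative to the D65 reference white
        def f(t):
            return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
        fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
        return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

    def delta_e76(rgb1, rgb2):
        """Euclidean distance in Lab space (CIE76 Delta E)."""
        lab1, lab2 = rgb_to_lab(rgb1), rgb_to_lab(rgb2)
        return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5

The pixel test would then become delta_e76(pixel, fg) < delta_e76(pixel, bg). Is there a metric along these lines that would work better for my case?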