
Given an image of a billiard ball, I want to be able to tell which ball it is. The colors of the balls are reasonably well known and don't deviate much, apart from a few known exceptions. But what's the best way to recognize a particular ball?

My approach, given a picture that we know contains nothing but a billiard ball (no background, etc.), is to map each pixel to one of the known colors we expect on billiard balls. Ideally, this mapping would only ever produce white (all balls have some white), black (even less of it, for the writing, except on the 8 ball), and the color that determines the ball's number. So for the 2 ball, I would expect maybe 90% red, 7% white and 3% black, give or take. This approach has had moderate success.
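Roughly, the mapping step looks something like the sketch below. The reference values are made up, and a plain RGB Euclidean distance stands in for whatever metric ends up being used; the point is just the per-pixel nearest-color assignment and the resulting fractions:

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;

class BallColorHistogram
{
    // Hypothetical reference colors; real values would come from sampled ball images.
    static readonly Dictionary<string, Color> References = new Dictionary<string, Color>
    {
        ["white"] = Color.FromArgb(240, 240, 240),
        ["black"] = Color.FromArgb(20, 20, 20),
        ["red"]   = Color.FromArgb(200, 30, 30),
        ["pink"]  = Color.FromArgb(230, 120, 150),
    };

    // Plain RGB Euclidean distance, only to illustrate the mapping step;
    // a perceptual distance (e.g. DeltaE in Lab) would replace this.
    static double Distance(Color a, Color b)
    {
        double dr = a.R - b.R, dg = a.G - b.G, db = a.B - b.B;
        return Math.Sqrt(dr * dr + dg * dg + db * db);
    }

    // Returns the fraction of pixels assigned to each reference color.
    public static Dictionary<string, double> Fractions(Bitmap ball)
    {
        var counts = References.Keys.ToDictionary(k => k, _ => 0);
        for (int y = 0; y < ball.Height; y++)
            for (int x = 0; x < ball.Width; x++)
            {
                Color px = ball.GetPixel(x, y);
                string nearest = References.OrderBy(r => Distance(px, r.Value)).First().Key;
                counts[nearest]++;
            }
        double total = ball.Width * ball.Height;
        return counts.ToDictionary(c => c.Key, c => c.Value / total);
    }
}
```

(`GetPixel` is slow; a real version would lock the bitmap bits, but that's beside the point here.)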

The problem is how to properly map a pixel's color to one of the known colors. Hue alone works well for most balls, but for some (pink 4 + red 3) the hue can be the same and only the saturation differs.

I feel I could come up with my own set of rules for this and tweak them for a while, but I wanted to see if there is a better option available. I've tried an implementation of CIE76, but the results were disappointing. I'd rather not roll my own implementation of a known algorithm, but I can if there is a good, programmer-friendly explanation.
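For context, the CIE76 distance itself is trivial once you have Lab values; it's just the Euclidean distance in CIELAB. The real work (and the part I'd rather not write myself) is the sRGB -> XYZ -> Lab conversion that produces the L, a, b components. A minimal snippet of the distance part:

```csharp
// CIE76: Euclidean distance between two colors in CIELAB space.
// The inputs are assumed to already be converted from RGB to Lab.
static double DeltaE76(double l1, double a1, double b1,
                       double l2, double a2, double b2)
{
    double dl = l1 - l2, da = a1 - a2, db = b1 - b2;
    return Math.Sqrt(dl * dl + da * da + db * db);
}
```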

Advice, thoughts, or resources?

Rollie
  • It's probably a good idea to look into some good vision libraries. – DarkAngel Oct 06 '16 at 09:32
  • I've tried opencv a bit, but it's hard to work with, especially in .net – Rollie Oct 06 '16 at 09:51
  • See [here](http://stackoverflow.com/questions/27374550/how-to-compare-color-object-and-get-closest-color-in-an-color/27375621?s=1|1.0441#27375621) for a few ideas about color distance – TaW Oct 06 '16 at 09:56
  • All tried actually - I've done a lot of googling before asking here – Rollie Oct 06 '16 at 10:16
  • I would start with this: [RGB value base color name](http://stackoverflow.com/a/37476754/2521214) and use approach/color space which most accurately transforms sample images ... use as base color table the colors of balls, and table. – Spektre Oct 06 '16 at 11:08
  • googling for images I don't see how either pink and red look alike nor how pink is 4 and red is 2. But I also see that the colors do vary quite a bit, so I think you'll need to train the numbers to use with your data. Will you work with only one or with varying sets of balls? if it is one you may want to post an example or two, maybe the red vs pink balls. – TaW Oct 06 '16 at 11:41
  • why exactly did your CIE76 approach fail? how many samples did you have in each colour class? I assume you had more than just one of an ideal colour that you calculated the DeltaE for? – Piglet Oct 06 '16 at 13:50
  • Erm, the two-ball is mostly blue, not red ... – EluciusFTW Oct 06 '16 at 21:16
  • @Piglet I did actually just have one color for most balls; I had thought that CIE76 didn't work well for different illuminations, so I didn't pursue. If I take ~8 samples per ball color, could I expect CIE76 to do well with various lightings? – Rollie Oct 07 '16 at 01:54
  • @Rollie CIE76 just defines a measure of "distance between colours" you could also use the Euclidean distance in RGB space or something else. Maybe both? I think the important thing is that you have multiple samples for each colour and that you include more information than just Hue. – Piglet Oct 07 '16 at 08:48

1 Answer


Per the comments: DeltaE (CIE76 and its variations) works much better with a larger sample size. My original thought was to find the 'true' colors of the balls and match against those, rather than keeping lots of samples that vary with lighting, but that proved ineffective; it might become viable if the lighting itself could be accounted for.

In the end I found a C# library called Colourful, available on NuGet, that was easy to use and seemed well written. The RGB -> Lab transform is expensive, so preprocess where possible.
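A minimal sketch of what "larger sample size" plus "preprocess where possible" could look like. The `Lab` struct and class names here are illustrative, not Colourful's API; the idea is that the reference samples (several per ball color) are converted to Lab once up front, and classification is a nearest-sample DeltaE lookup:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A Lab triple; whatever the conversion library returns can be copied into this.
readonly struct Lab
{
    public readonly double L, A, B;
    public Lab(double l, double a, double b) { L = l; A = a; B = b; }
}

class LabClassifier
{
    // Several Lab samples per ball color, converted once up front
    // (the expensive RGB -> Lab step) and then reused for every pixel.
    readonly Dictionary<string, Lab[]> _samples;

    public LabClassifier(Dictionary<string, Lab[]> precomputedSamples)
    {
        _samples = precomputedSamples;
    }

    // CIE76: Euclidean distance in Lab space.
    static double DeltaE76(Lab x, Lab y)
    {
        double dl = x.L - y.L, da = x.A - y.A, db = x.B - y.B;
        return Math.Sqrt(dl * dl + da * da + db * db);
    }

    // A pixel is assigned to the color class whose nearest sample is closest.
    public string Classify(Lab pixel) =>
        _samples.OrderBy(s => s.Value.Min(sample => DeltaE76(pixel, sample))).First().Key;
}
```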

Rollie