
There are many tools online that take an image and simulate what it might look like to someone with color blindness. However, I can't find any descriptions of the algorithms they use.

Is there a standard algorithm used to simulate color blindness? I'm aware that there are many types of color blindness (see the Wikipedia page on the subject for more details), but I'm primarily interested in algorithms for simulating dichromacy.

GEOCHET
templatetypedef

3 Answers


I had the same frustration and wrote an article comparing open-source color blindness simulations. In short, there are four main algorithms:

  1. Coblis and the "HCIRN Color Blind Simulation function". You'll find this one in many places, including a JavaScript implementation by MaPePeR. The full HCIRN simulation function was never properly evaluated, but it is reasonable in practice. However, the "ColorMatrix" approximation by colorjack is very inaccurate and should be avoided entirely (the author himself said so). Unfortunately it is still widespread because it was easy to copy/paste.

  2. "Computerized simulation of color appearance for dichromats" by Brettel, Viénot, and Mollon (1997). A very solid reference. Works for all kinds of dichromacies. I wrote a public domain C implementation in libDaltonLens.

  3. "Digital video colourmaps for checking the legibility of displays by dichromats" by Viénot, Brettel and Mollon (1999). A solid reference too, simplifies the 1997 paper for protanopia and deuteranopia (2 of the 3 kinds of color blindness). Also in libDaltonLens.

  4. "A Physiologically-based Model for Simulation of Color Vision Deficiency" by Machado et al. (2009). Precomputed matrices are available on their website, which makes it easy to implement yourself. You just need to add the conversion from sRGB to linearRGB.

nburrus

Looks like your answer is in the Wikipedia entry you linked.

For example:

Protanopia (1% of males): Lacking the long-wavelength sensitive retinal cones, those with this condition are unable to distinguish between colors in the green–yellow–red section of the spectrum. They have a neutral point at a greenish wavelength around 492 nm – that is, they cannot discriminate light of this wavelength from white.

So you need to de-saturate any colors in the green-yellow-red section of the spectrum toward white (adjusting image color saturation is a standard operation).

The other two types of dichromacy can be handled similarly; a rough sketch of this idea is below.
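To show what such a heuristic might look like (and how many details it leaves open, as the comment below notes), here is a deliberately naive Python sketch. The hue band and the uniform desaturation strength are arbitrary illustrative choices, not values derived from the 492 nm neutral point:

```python
import colorsys

def naive_protanopia_desaturate(r, g, b, strength=1.0):
    """Crude heuristic: pull hues in a red-yellow-green band toward grey.

    r, g, b are floats in [0, 1]. The hue band and the uniform desaturation
    factor are illustrative guesses, not values from the vision literature.
    """
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    # In colorsys' [0, 1) hue scale: red ~ 0.0, yellow ~ 0.17, green ~ 0.33.
    if h <= 0.45 or h >= 0.95:        # rough red-yellow-green band (red wraps around)
        s *= (1.0 - strength)         # desaturate toward the neutral axis
    return colorsys.hls_to_rgb(h, l, s)

# Example: a pure red collapses to a mid grey at full strength.
print(naive_protanopia_desaturate(1.0, 0.0, 0.0))   # -> (0.5, 0.5, 0.5)
```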

Byron Whitlock
    While I like your analysis, this answer leaves a lot of key details unaccounted for. How would you determine what colors are "close" to this peak color? Given the "distance" from that color, how do you determine how much to desaturate? – templatetypedef Aug 28 '12 at 23:50

First we have to understand how the eye works:

  1. A regular/healthy eye has three types of cones and one type of rod, each with its own activation function over the visible spectrum of light.

  2. Their activations then pass through some function to produce the signal that goes to your brain. Roughly speaking, the function takes 4 channels as input and produces 3 channels as output (namely lightness, yellow-blue and red-green).

  3. A colorblind person has one of those two things different (as far as I know it is usually, if not always, 1.), so for example the person may be missing one type of cone, or that cone's activation function may be shifted.

The best thing to do would be:

  1. Convert all pixels from RGB space to a combination of frequencies (with intensities). To do this, first calculate the activations of each of the three cones (of a healthy person), then find a "natural" solution for a set of frequencies (+ intensities) that would result in the same activations. Of course, one solution is just the original three RGB frequencies with their intensities, but it is unlikely that the original image actually had that. A natural solution would be, for example, a normal distribution around some frequency (or even just one frequency).

  2. Then, (again for each pixel) calculate the activations of a colorblind person's cones to your combination of frequencies.

  3. Finally, find an RGB value such that a healthy person would have the same activations as the ones the colorblind person has.

Note that, if the way these activations are combined is also different for the relevant type of colorblindness, you might want to carry that change through the steps above as well (so instead of matching activations, you are matching the output of the function applied to the activations). A small matrix-form sketch of steps 2 and 3 follows.
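If you accept the trivial version of step 1 (treat each pixel's spectrum as just the three display primaries at their given intensities), the cone activations for those primaries collapse into a fixed RGB-to-LMS matrix, and steps 2 and 3 become plain matrix algebra. Here is a sketch under that simplification, with both matrices left as parameters because their values depend on which cone fundamentals you adopt:

```python
import numpy as np

def simulate_for_dichromat(image_rgb, rgb_to_lms, lms_defect):
    """Steps 2 and 3 of the pipeline above, in matrix form.

    image_rgb  : float array of shape (H, W, 3), linear RGB in [0, 1]
    rgb_to_lms : hypothetical 3x3 matrix giving a healthy observer's cone
                 (LMS) activations for the three display primaries
                 (this is step 1 collapsed to the trivial spectral solution)
    lms_defect : hypothetical 3x3 matrix mapping healthy LMS to the
                 dichromat's LMS, e.g. a projection that estimates the
                 missing cone from the two remaining ones
    """
    lms = image_rgb @ rgb_to_lms.T                     # step 2: cone activations
    lms_cb = lms @ lms_defect.T                        # what the dichromat encodes
    rgb_back = lms_cb @ np.linalg.inv(rgb_to_lms).T    # step 3: back to RGB
    return np.clip(rgb_back, 0.0, 1.0)
```

Note that `inv(rgb_to_lms) @ lms_defect @ rgb_to_lms` is exactly the kind of precomputed 3x3 matrix that the Viénot (1999) and Machado (2009) references in the first answer provide.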

indjev99
    Basically you're right, but there is a big problem: we can get [RGB from wavelength](https://stackoverflow.com/a/22681410/2521214), but not the other way around. So this will only work for images with known wavelengths (PBR renders, images where we can detect objects and infer their spectral composition, or multi-band images). – Spektre Oct 12 '20 at 11:19