
I want to check whether two colors, once converted to grayscale, will be too close for the human eye to distinguish, so I can warn the user when 'dangerous' color pairs are picked. Based on the result, we can then decide, for people with poor eyesight, whether to change one of the two colors to white or black to enhance the readable contrast. For example, the hex colors #9d5fb0 (purple) and #318261 (green) turn into almost the same grey tone. Seen in HSB, the B values differ by just 1%, so the healthy human eye cannot really see the difference. Likewise, the 8-bit K value in this case differs by only 2%.

I have learned that the luminance method is the more sophisticated way to judge grey tones the way the human eye sees colors, yet how to do this programmatically is beyond my current understanding. I could write it in either PHP or JS once I understand the math.

Whether the values are picked from CSS, from a screen pixel, or from a file as an image object, I guess we should always handle the input as RGB, right?

something like:

$result = grayScaleDifference('#9d5fb0','#318261');

or

$result = 8bitK_difference('#9d5fb0','#318261');

or

$result = luminanceDifference('#9d5fb0','#318261');

So what is the best formula to compare them in a script, without changing or converting the actual image or color objects?

2x2p
  • Grays have the same r/g/b values, so convert both colours to gray and check if `|red1 - red2|` (or green or blue) is within some tolerance. – James May 18 '19 at 12:33
  • Humans can only detect 256 discrete shades of gray, so maybe convert each color to grayscale and map them to a 0-255 range, then see if the difference between them is > 1 (or maybe 5 for a larger differences because you need some easy to see difference). – bob May 18 '19 at 16:26
  • @bob humans can see all/any shade of grey. The question is at what point one shade of grey has a JND *(Just Noticeable Difference)* from another shade. In normal vision, contrast threshold (JND) is about 1% for photopic vision, between 8 cd/m2 and 520 cd/m2. But the human range of vision is much greater than 8-520, as we adapt to different conditions and have different contrast thresholds in darker or brighter conditions. continued:... – Myndex May 19 '19 at 04:38
  • @bob A computer monitor is usually limited to between 1 cd/m2 and 300 cd/m2, so we can "fit" 256 shades, but just barely. Human perception is NOT linear; it follows a power curve, which is partly why gamma curves are used in 8-bit image files: to increase data density in the dark regions. My point being that with human perception, we can perceive a smooth grey gradient from 0 to 255 *only if the gradient has a gamma* such that the JNDs follow a curve weighted toward dark, and not a straight line. But in the "real" world, we can see far more than 256 shades. – Myndex May 19 '19 at 04:43
  • @James correctly converting a color to grey is a multi-step process (if you want accurate results) outlined in my answer below. Also, contrast needs to be at a ratio of 3:1, or as a percentage at 70% and above to follow typical human factors guidelines (but very dependent on the math method used). – Myndex May 19 '19 at 04:49
  • @Myndex Seems like the greyscale conversion is already being done, as the question isn’t “how do I pick the right grey for colour X”. So the important part is comparing the resulting greys generated by OP’s algorithm. – James May 19 '19 at 13:26
  • @James The OP was asking how to programmatically convert *colors* to luminance to determine contrast. The important part is correctly converting a color to Y, and then choosing a contrast math method, as I detailed in my answer. – Myndex May 20 '19 at 01:39
  • I know you believe that because you have answered that very question. Maybe OP can clarify. – James May 20 '19 at 02:17
  • @bob Just to add a reference for what I was saying regarding "number of shades", here's Poynton who (as always) explains things better than I do: http://poynton.ca/notes/Timo/Weber_and_contrast_ratio.html – Myndex May 23 '19 at 00:53
  • Thank you @2x2p for accepting my answer! :) – Myndex May 31 '19 at 03:49

5 Answers


FOLLOW-UP ANSWER

I'm posting this as a follow-up answer not only to clarify my initial answer (which I also just edited), but also to add code snippets of the various concepts. Each step in the R´G´B´ to Y process is important and must be performed in the order described, or the results will fail.

DEFINITIONS:

sRGB: sRGB is a tristimulus color model which is the standard for the Web, and used on most computer monitors. It uses the same primaries and white point as Rec709, the standard for HDTV. sRGB differs from Rec709 only in the transfer curve, often referred to as gamma.

Gamma: This is a curve used with various methods of image coding for storage and transmission. It is often similar to the perception curve of human vision. In digital, gamma's effect is to give more weight to the darker areas of an image such that they are defined by more bits in order to avoid artifacts such as "banding".

Luminance (notated L or Y): a linear measure or representation of light (i.e. NO gamma curve). As a measure, the unit is usually cd/m2. As a representation, it's the Y in CIEXYZ, commonly running from 0 (black) to 100 (white). Luminance features spectral weighting based on human perception of different wavelengths of light. However, luminance is linear in terms of light quantity: if 100 photons of light measure as 10, then 200 photons measure as 20.

L* (aka Lstar): perceptual lightness, as defined by CIELAB (L*a*b*). Where luminance is linear in terms of the quantity of light, L* is based on perception, and so is nonlinear in terms of light quantity, with a curve intended to match the photopic vision of the human eye (approximate gamma of ^0.43).

Luminance vs L*: 0 and 100 are the same in both luminance (written Y or L) and lightness (written L*), but in the middle they are very different. What we identify as middle grey sits at the very middle of L*, at 50, but that relates to a luminance (Y) of 18.4. In sRGB that's #777777, or 46.7%.

Contrast: The term for defining a difference between two L or two Y values. There are multiple methods and standards for contrast. One common method is Weber contrast, which is ΔL/L. Contrast is usually stated as a ratio (3:1) or a percentage (70%).

DERIVING LUMINANCE (Y) FROM sRGB

STEP ZERO (un - HEX)

If needed, convert a HEX color value to a triplet of integer values where #00 = 0 and #FF = 255.
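As a quick JS sketch of this step (the function name hexToTriplet is just illustrative, and it assumes a plain 6-digit hex string with an optional leading #):

function hexToTriplet(hex) {
    // strip an optional leading '#', then parse the three byte pairs
    hex = hex.replace('#', '');
    return [
        parseInt(hex.substr(0, 2), 16),  // R: 0 - 255
        parseInt(hex.substr(2, 2), 16),  // G: 0 - 255
        parseInt(hex.substr(4, 2), 16)   // B: 0 - 255
    ];
}

// hexToTriplet('#9d5fb0') returns [157, 95, 176]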

STEP ONE (8 bit to decimal)

Convert 8 bit sRGB values to decimal by dividing by 255:

   R´decimal = R´8bit / 255       G´decimal = G´8bit / 255       B´decimal = B´8bit / 255

If your sRGB values are 16 bit then convert to decimal by dividing by 65535.

STEP TWO (Linearize, Simple Version)

Raise each color channel to the power of 2.2, the same as an sRGB display. This is fine for most applications. But if you need to make multiple round trips into and out of sRGB gamma-encoded space, then use the more accurate version below.

   R´^2.2 = Rlin    G´^2.2 = Glin    B´^2.2 = Blin
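As a sketch in JS, steps one and two (simple version) combined; the name simpleLin is just illustrative:

function simpleLin(channel8bit) {
    // 8 bit channel -> decimal 0.0 - 1.0 -> simple 2.2 linearization
    return Math.pow(channel8bit / 255, 2.2);
}

// simpleLin(157) ≈ 0.344 (the R´ channel of #9d5fb0)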

STEP TWO (Linearize, Accurate Version)

Use this version instead of the simple ^2.2 version above if you are doing image manipulations and multiple round trips in and out of gamma encoded space.

function sRGBtoLin(colorChannel) {
    // Send this function a decimal sRGB gamma encoded color value
    // between 0.0 and 1.0, and it returns a linearized value.
    if ( colorChannel <= 0.04045 ) {
        return colorChannel / 12.92;
    } else {
        return Math.pow(( colorChannel + 0.055) / 1.055, 2.4);
    }
}

EDIT TO ADD CLARIFICATION: the sRGB linearization cited above uses the correct threshold from the official IEC standard, while the old WCAG 2 math uses an incorrect threshold (a known, open bug). Nevertheless, the threshold difference does not affect the WCAG 2 results, which are instead plagued by other factors.

STEP THREE (Spectrally Weighted Luminance)

The normal human eye has three types of cones that are sensitive to red, green, and blue light. But our spectral sensitivity is not uniform, as we are most sensitive to green (555 nm), and blue is a distant last place. Luminance is spectrally weighted to reflect this using the following coefficients:

   Rlin * 0.2126 + Glin * 0.7152 + Blin * 0.0722 = Y = L

Multiply each linearized color channel by its coefficient and sum all three together to find Y, the luminance.
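Putting steps one through three together as a sketch, reusing the accurate sRGBtoLin() from above (the wrapper name rgbToY is just illustrative):

function rgbToY(R8bit, G8bit, B8bit) {
    // linearize each decimal channel, then apply the spectral coefficients
    return 0.2126 * sRGBtoLin(R8bit / 255)
         + 0.7152 * sRGBtoLin(G8bit / 255)
         + 0.0722 * sRGBtoLin(B8bit / 255);   // Y: 0.0 (black) to 1.0 (white)
}

// rgbToY(49, 130, 97) ≈ 0.1748 (the green #318261)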

STEP FOUR (Contrast Determination)

There are many different means to determine contrast, and various standards as well. Some equations work better than others depending on the specific application.

WCAG 2.x
The current web page contrast guideline listed in WCAG 2.0 and 2.1 is simple contrast with an offset:

   C = ((Llighter + 0.05) / (Ldarker + 0.05)) : 1

This gives a ratio, and WCAG specifies 3:1 for non-text and 4.5:1 for text to meet the "AA" level.
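In JS, that equation is simply (a sketch of the equation as stated here, not the full WCAG reference implementation):

function wcag2Contrast(Y1, Y2) {
    // lighter over darker, each offset by 0.05; returns the "n" in n:1
    return (Math.max(Y1, Y2) + 0.05) / (Math.min(Y1, Y2) + 0.05);
}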

However, it is a weak method for a variety of reasons. I'm on record pointing out its flaws in a current GitHub issue (WCAG #695) and have been researching alternatives.

EDIT TO ADD (Jan 2021):

The replacement to the old WCAG 2 contrast is the APCA:

"Advanced Perceptual Contrast Algorithm"

Part of the new WCAG 3, it is a substantial leap forward. While stable, I still consider it beta, and because it is a bit more complicated, it's probably better to link to the SAPC/APCA GitHub repo for the time being.

Some other previously developed contrast methods in the literature:

Modified Weber
The Hwang/Peli Modified Weber provides a better assessment of contrast as it applies to computer monitors / sRGB.

   C = (Llighter – Ldarker) / (Llighter + 0.1)

Note that I chose the flare factor of 0.1 instead of 0.05 based on some recent experiments. That value is TBD though, and a different value might be better.
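A sketch of that equation in JS (the function name is just illustrative, with the 0.1 flare factor as discussed):

function modWeberContrast(Y1, Y2) {
    var Ylighter = Math.max(Y1, Y2);
    var Ydarker  = Math.min(Y1, Y2);
    return (Ylighter - Ydarker) / (Ylighter + 0.1);
}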

LAB Difference
Another alternative that I happen to like more than others is converting the linearized luminance (L) to L* which is Perceptual Lightness, then just subtracting one from the other to find the difference.

Convert Y to L*:

function YtoLstar(Y) {
    // Send this function a luminance value between 0.0 and 1.0,
    // and it returns L*, perceptual lightness
    if ( Y <= (216/24389) ) {    // The CIE standard states 0.008856, but 216/24389 is the intent (0.008856451679036)
        return Y * (24389/27);   // The CIE standard states 903.3, but 24389/27 is the intent (903.296296296296296)
    } else {
        return Math.pow(Y, (1/3)) * 116 - 16;
    }
}

Once you've converted Y to L*, then a useful contrast figure is simply:

    C = L*lighter – L*darker

The results here may need to be scaled to be similar to other methods. A scaling of about 1.6 or 1.7 seems to work well.
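So a minimal usage sketch, building on YtoLstar() above (the example Y values and the 1.65 scale, a midpoint of the 1.6 - 1.7 range, are just illustrative):

var Y1 = 0.184, Y2 = 0.05;                       // two example luminances
var C  = Math.abs(YtoLstar(Y1) - YtoLstar(Y2));  // L* difference
var Cscaled = C * 1.65;                          // scaled to match other methods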

There are a number of other methods for determining contrast, but these are the most common. Some applications, though, will do better with other contrast methods, such as Michelson Contrast, Perceptual Contrast Length (PCL), and Bowman/Sapolinski.

ALSO, if you are looking for color differences beyond the luminance or lightness differences, then CIELAB has some useful methods in this regard.

SIDE NOTES:

Averaging RGB No Bueno!

OP 2x2p mentioned a commonly cited equation for making a greyscale of a color as:

    GRAY = round((R + G + B) / 3);

He pointed out how inaccurate it seemed, and indeed, it is completely wrong. The spectral weighting of R, G, and B is substantial and cannot be overlooked: GREEN has a higher luminance than BLUE by an ORDER OF MAGNITUDE. You cannot just sum the three channels, divide by three, and get anything close to the actual luminance of a particular color.

I believe the confusion over this may have come from a color control known as HSI (Hue, Saturation, Intensity). But this control is not (and was never intended to be) perceptually uniform! HSI, like HSV, is just a "convenience" for manipulating color values in a computer. Neither is perceptually uniform, and the math they use exists strictly to support an "easy" way to adjust color values in software.

OP's Sample Colors

2x2p posted his code using '#318261' and '#9d5fb0' as test colors. Here's how they look in my spreadsheet, along with each value at every step of the conversion (using the "accurate" sRGB method):

[Spreadsheet screenshot: the step-by-step conversion values for #9d5fb0 and #318261, both landing near middle grey #777777]

Both are close to middle grey of #777777. Notice also that while the luminance L is just 18, the perceptual lightness L* is 50.
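If you want to sanity-check those numbers, here's a quick JS sketch using the inverse of the accurate sRGB transform (the rounding to 8 bit is just for illustration):

var Y = 0.184;                                    // middle grey luminance
var Lstar = 116 * Math.pow(Y, 1/3) - 16;          // ≈ 50, perceptual lightness
var sRGB  = 1.055 * Math.pow(Y, 1/2.4) - 0.055;   // ≈ 0.467, gamma re-encoded
var hex   = Math.round(sRGB * 255).toString(16);  // "77", i.e. #777777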

Myndex
  • The whole conversation inspired me to turn it into a single function that returns all possible values for further processing, using your accurate gamma. To the output I will soon add the inverted RGB and the nearest neighbour among the 256 web colours. But before this I still haven't figured out: if one wants grayscale, how can L*, Y or the linear values produce #777777 from #9d5fb0 like Adobe does? – 2x2p May 31 '19 at 14:37
  • Hi @2x2p okay in the color graphic at the end of the post above, **Y** is the SUM of RGB weighted luminances, i.e Y=Rlum+Glum+Blum — once you have Y, then the grey is very simply Rlin=Y, Glin=Y, Blin=Y — Notice the Rlin Glin Blin values for the #777777 patch in the graphic. This is BECAUSE the weighted linear RGB luminance coefficients sum to 1.0. Or put another way, just take the value for Y, add back the gamma, and then use that for each color and you have grey. For 777, it's Y=0.184, apply sRGB gamma makes it 0.467, 8bit is 0.467*255=119 which is hex 77, use that for each channel. – Myndex Jun 01 '19 at 08:17
  • By "take the value for Y, add back the gamma", do you mean actually reversing the gamma that was applied through "Math.pow((( channel + 0.055)/1.055),2.4);"? Because whatever I have tried, I never get 0.467 from 0.184. – 2x2p Jun 05 '19 at 08:25
  • Correct. 0.184 Y = 0.467 sRGB = 50 L* – Myndex Jun 06 '19 at 03:16
  • I kept getting 0.451, not 0.467. But never mind, I found out JavaScript is picky about the syntax here; it should just look like: `(1.055*Math.pow(Ylin,(1/2.4))-0.055)` – 2x2p Jun 06 '19 at 12:25
  • (1) WCAG uses a different constant for gamma decoding than the sRGB standard. So your formula is not WCAG standard. – Ryan Jan 26 '21 at 22:54
  • (2) Gamma decoding should be clamped before and after, but that's often left out of definitions (possibly including WCAG) – Ryan Jan 26 '21 at 22:55
  • @Ryan The WCAG 2 guideline math is wrong Ryan and this is well known, as it cited an obsolete draft. I am citing the correct IEC STANDARD. Regardless, the different threshold has no effect on the WCAG2 contrast, which is being replaced for WCAG3, and I am not going to post the incorrect math here. Also, when stating that the inputs are 0 to 255 defines that they are clamped. **THANKS FOR THE SPURIOUS DOWNVOTE**. *FOR THE RECORD*: I am a W3 and AGWG member, and the inventor of APCA, the new contrast method for WCAG 3. – Myndex Jan 27 '21 at 21:14
  • @Myndex I'm well aware of that, its deficiencies, and your contribution to a better method. Didn't notice it was you here. Thanks for editing the answer to flag. When stating 0...255, I don't think many people realize to clamp. In fact it's in the literature as a common mistake. Flagging it helps stop disinformation. I think a lot of devs go into this not realizing how technical it is, how much bad info there is, and get lost in words. Hopefully you have a better day. – Ryan Jan 27 '21 at 23:57
  • @Ryan That's an interesting point Ryan. As I started coding circa 1975 with line numbered Fortran IV saving programs to paper-punch-hole-tape, I may be assuming too much regarding things that seem obvious to me, like 0-255 literally means it shall be clamped to 0-255. At the same time I hate ever talking "down" or being overly pedantic, an ever present problem if you look at the excess length of my posts, articles, and white papers... – Myndex Jan 28 '21 at 23:25
  • @Myndex You care more than others, so yeah you'd see the meaning! Others maybe in haste/overload miss "I need to clamp this coming in and out to be valid". Even some pros fall for that issue... or even earlier steps. A live example you might know: Mozilla recently implemented Machado et al. Nice! For achromatopsia, there was an attempt to use XYZ Y luminance. Instead, it gets NTSC luma... from sRGB. I was going to issue a PR, but I don't know what a peer-review accepted monochromat simulation is. Chromium also adopted Machado, but uses the correct sRGB transform... but I think again w/o gamma. – Ryan Jan 29 '21 at 00:12
  • @Ryan For -opia types, Machado is equivalent to Brettel. I'm using Brettel for my CVD sim at myndex.com/CVD/; it includes a blue cone monochromat sim, which as I recall I derived from the Brettel math & LMS space *(I should revisit)*. It models the color loss but not the acuity loss of BCM, which is usually in the low-vision range of 20/70 or worse. As for peer review: BCM is so rare, there are limited studies at all. BCM essentially requires assistive technology due to the lack of fovea and extremely poor acuity, whereas deutan/protan can definitely be helped through proper design and luminance contrast. – Myndex Jan 31 '21 at 18:55
  • Thanks for this! Esp. the warning re: the outdated RGB linearization values. I've updated a program of mine with your recommended replacement values and the visual results are immediately more useful / correct. I've noticed the most drastic improvement in how deeply saturated reds are treated – diopside Apr 22 '23 at 01:05

LUMINANCE CONTRAST and PERCEPTION

What you are looking for is how to assess Luminance Contrast.

You are definitely on the right track — 6% of males have color blindness and they rely on luminance contrast and not color contrast. I have a chart here that demonstrates that very issue.

And just FYI the term is "luminance" not luminosity. Luminosity refers to light emitted over time, often used in astronomy. When we are talking about colorimetry, we use the term luminance, a different measure of light and defined by CIEXYZ (CIE 1931).

As it happens I am in the midst of researching contrast assessment methods to provide some new and more accurate standards. You can follow some of the progress on GitHub, and on my perception research page.

It is not as straightforward as one might think, as there are a number of factors that affect human perception of contrast. There is a lot of discussion on this in the GitHub thread at the moment.

DETERMINING LUMINANCE

Luminance is a spectrally weighted but otherwise linear measure of light. The spectral weighting is based on how human trichromatic vision perceives different wavelengths of light. This was part of the measurements in the CIE 1931 experiments and resultant colorspaces such as CIEXYZ (Luminance is the Y in XYZ).

While XYZ is a linear model of light, human perception is very much non-linear. As such, XYZ is not perceptually uniform. Nevertheless, for your purposes you just want to know the equivalent luminance of a color vs a grey patch.

Assuming you are starting with sRGB video (i.e. the web and computer standard colorspace) you first need to remove the gamma encoding, and then apply the spectral weighting.

I've made a lot of posts here on Stack regarding gamma, but if you want a definitive explanation I recommend Poynton's Gamma FAQ.

Converting sRGB to linear (gamma 1.0).

1) Convert the R´G´B´ values from 8 bit integer (0-255) to decimal (0.0 - 1.0) by dividing each channel individually by 255. The R´G´B´ values must be 0 to 1 for the following math to work. Also, here's a link to a post with a code snippet for converting a single number (like a 6 digit hex) into RGB channels.

2) Linearize each channel. The lazy way is to apply a power curve of 2.2, which is how a computer monitor displays the image data — for the purposes of judging the luminance of a color this is fine:

   R´^2.2 = Rlin    G´^2.2 = Glin    B´^2.2 = Blin

3) An ALTERNATE (more accurate) method: if you are doing image processing and going back and forth between sRGB and linear, then there is a more accurate method, which is on Wikipedia. But also, here's a code snippet from my spreadsheet which I use for a similar purpose:

  =IF( A1 <= 0.04045 ; A1 / 12.92 ; POWER((( A1 + 0.055)/1.055) ; 2.4))

What this shows is that for values at or below 0.04045 you just divide by 12.92, but for values above it you apply an offset and a power of 2.4. Note that in the "lazy way" we used 2.2, but the curves are nearly identical due to the offset and the linear segment.
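And if it's useful, a direct JS translation of that spreadsheet cell (where A1 is the gamma-encoded decimal channel value) might look like:

function linearize(channel) {
    // channel: gamma encoded decimal sRGB value, 0.0 to 1.0
    return (channel <= 0.04045)
        ? channel / 12.92
        : Math.pow((channel + 0.055) / 1.055, 2.4);
}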

Do either step 2 OR step 3 but not both.

4) Finally, apply the coefficients for spectral weighting, and sum the three channels together:

Rlin * 0.2126 + Glin * 0.7152 + Blin * 0.0722 = Y

And that gives you Y, your luminance for a given color. Luminance is also known as L, but not to be confused with L* (Lstar), which is perceptual lightness, not luminance.

Determining Perceived Contrast

Now, if you want to determine the difference between two samples, there are a number of methods. Weber contrast is essentially ΔL/L and has been the standard since the 19th century. But for stimuli displayed on computer monitors, I suggest some more modern approaches, for instance the following modification for better perceptual results:

(Llighter – Ldarker) / (Llighter + 0.1)

There is also "Perceptual Contrast Length," Bowman-Sapolinski, and others including some I am working on. You can also convert to CIELAB (L*a*b*) which is based on human perception, and there you just subtract L*1 from L*2.

Also, there are a number of other factors that affect contrast perception such as font size and weight, padding (See Bartleson–Breneman Surround Effects) and other factors.

Please let me know if you have any questions.

Myndex
  • You should also emphasize, as you have done in [this issue on the Web Content Accessibility Guidelines issues page](https://github.com/w3c/wcag/issues/695), for example, that "color contrast" is only one part of the readability equation and other factors such as font size and font weight also come into play. – Peter O. May 18 '19 at 16:49
  • @PeterO. Indeed, ***perceived contrast*** is an interconnected set of factors that includes (and is not limited to) font size, font weight, surrounding DIV padding, ambient light, screen reflectance, and negative or positive mode (i.e. white on black), all in addition to luminance contrast. Not to mention variations due to eye age and visual impairments. But yeah, I'm trying to find the "right balance" on StackExchange between answering a question and writing a book, LOL... 8-) That said, the contrast between two colors is not enough to be definitive in practical use. Maybe I'll edit the post. – Myndex May 19 '19 at 00:12
  • wow, nice elaborate answer from Myndex. First of all I corrected the word luminance in my question. Now I will try what function I can make based on this. I know font size and weight plus environmental conditions also play a role. But I am currently focused on a background color vs foreground color vs font color. For example testing if a menu color extended on top of a background color will fail in grayscale. There are other ways to ensure a better text contrast like giving text an almost invisible tiny shadow as it's often used in video subtitles. But that's a different topic. For now thanx! – 2x2p May 19 '19 at 08:21
  • You're welcome @2x2p — Interesting that you mentioned the triad of BG/FG/Font, as that is part of my research focus. There is some research on this (see ***Bartleson–Breneman Surround Effects***), and it's one thing that's not in the standards: namely, padding around text where the DIV text container is a substantially different color than the overall BG. It is one of a few things I hope to have better answers for this year. – Myndex May 19 '19 at 12:42
  • @Myndex I have tried to convert your answer into `JS` today and wonder if my assumption at the end is correct? – 2x2p May 20 '19 at 09:18

Maybe this is something that can help. (Pulled from ye dusty olde JS crypt.)

I believe this was initially developed to determine mathematically whether a text color on a colored background is actually readable.

Color Contrast

Defined by (WCAG Version 2)

http://www.w3.org/TR/2008/REC-WCAG20-20081211

Contrast ratios can range from 1 to 21

section 1.4.3

  • Highly Visible: (enhanced) Minimum contrast ratio of 7 to 1 -- 7:1
  • Normal Text: Minimum contrast ratio of 4.5 to 1 -- 4.5:1
  • Large Text: Minimum contrast ratio of 3 to 1 -- 3:1

This contrastRatio function spits out a number between 1 and 21, which serves as the first number in the ratio.

e.g. n:1 where "n" is the result of this method

The higher the number, the more readable it is.

function getLum(rgb) {

    var i, x;
    var a = []; // so we don't mutate
    for (i = 0; i < rgb.length; i++) {
        x = rgb[i] / 255;
        a[i] = x <= 0.03928 ? x / 12.92 : Math.pow((x + 0.055) / 1.055, 2.4);
    }
    return 0.2126 * a[0] + 0.7152 * a[1] + 0.0722 * a[2];

}


var RE_HEX_RGB = /[a-f0-9]{6}|[a-f0-9]{3}/i;

function HEX_RGB(str) {
    var match = String(str).match(RE_HEX_RGB);
    if (!match) {
        return [0, 0, 0];
    }

    var colorString = match[0];

    // Expand 3 character shorthand triplet e.g. #FFF -> #FFFFFF
    if (match[0].length === 3) {
        var Astr = colorString.split('');
        for (var i = 0; i < Astr.length; i++) {
            var ch = Astr[i];
            Astr[i] = ch + ch;
        }
        colorString = Astr.join('');
    }

    var integer = parseInt(colorString, 16);

    return [
        (integer >> 16) & 0xFF,
        (integer >> 8) & 0xFF,
        integer & 0xFF
    ];
};


function contrastRatio(rgb1, rgb2) {
    var l1 = getLum(rgb1);
    var l2 = getLum(rgb2);
    return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
}


var c1 = '#9d5fb0';
var c2 = '#318261';

var cr = contrastRatio( HEX_RGB(c1), HEX_RGB(c2) );
console.log("cr", cr);
bob
  • The WCAG math is wrong on a number of levels, and there are open issues on GitHub that detail this. First, the correct sRGB threshold is 0.04045, not 0.03928. But the other problem is that the contrast equation fails to address human perception. Of course, if all you are concerned with is "matching WCAG standards" then use their math, but FYI a replacement equation is being developed to address these problems. – Myndex May 20 '19 at 01:45

This is my updated code based on what Myndex wrote before.

For the test example purple I use hex #9d5fb0 (stands for R:157, G:95, B:176) and for green I use hex #318261 (stands for R:49, G:130, B:97)

JS:

    function HexToRGB(hex) {
      // to allow shorthand input like #FFF or FFFFFF without # sign make it #FFFFFF
      hex = String(hex);
      if(hex.length==3){hex='#'+hex.substr(0, 1)+hex.substr(0, 1)+hex.substr(1, 1)+hex.substr(1, 1)+hex.substr(2, 1)+hex.substr(2, 1);}
      if(hex.length==4){hex='#'+hex.substr(1, 1)+hex.substr(1, 1)+hex.substr(2, 1)+hex.substr(2, 1)+hex.substr(3, 1)+hex.substr(3, 1);}
      if(hex.length==6){hex='#'+hex;}
      let R = parseInt(hex.substr(1, 2),16);
      let G = parseInt(hex.substr(3, 2),16);
      let B = parseInt(hex.substr(5, 2),16);
      console.log("rgb from "+hex+" = "+[R,G,B]);   
      return [R,G,B];
    }

The programmatic average method for grayscale that most articles refer to is:

GRAY = round((R + G + B) / 3);

JS:

    function RGBToGRAY(rgb) {
      let avg = Math.round((rgb[0]+rgb[1]+rgb[2])/3); // round, as in the formula above
      return [avg,avg,avg];
    }

This would turn the purple into #8f8f8f because average = 143

This would turn the green into #5c5c5c because average = 92

The difference between 92 and 143 is so large that my test would incorrectly pass these colors as distinguishable. Adobe's simulation converts the same examples to grayscale as:

Hex #777777 standing for R:119, G:119, B:119

Hex #747474 standing for R:116, G:116, B:116

The difference between 116 and 119 is obviously small and should fail my expected difference test. So the averaging RGBToGRAY method is hereby proven inaccurate.

Now as explained by Myndex we should make it linear and apply the gamma 2.2 correction.

R´^2.2 = Rlin G´^2.2 = Glin B´^2.2 = Blin

JS:

    function linearFromRGB(rgb) {
      // make it decimal
      let R = rgb[0]/255.0; // red channel decimal
      let G = rgb[1]/255.0; // green channel decimal
      let B = rgb[2]/255.0; // blue channel decimal
      // apply gamma
      let gamma = 2.2;
      R = Math.pow(R, gamma); // linearize red
      G = Math.pow(G, gamma); // linearize green
      B = Math.pow(B, gamma); // linearize blue
      let linear = [R,G,B];
      console.log('linearized rgb = '+linear);  
      return linear;
    }

Gamma corrected linear result for purple is now R:0.3440, G:0.1139, B:0.4423 and the result for green is R:0.0265, G:0.2271, B:0.1192

Now, getting the luminance Y (the Y in the XYZ scale) by applying the coefficients would be this:

Y = Rlin * 0.2126 + Glin * 0.7152 + Blin * 0.0722

JS

    function luminanceFromLin(rgblin) {
      let Y = (0.2126 * (rgblin[0])); // red channel
      Y = Y + (0.7152 * (rgblin[1])); // green channel
      Y = Y + (0.0722 * (rgblin[2])); // blue channel
      console.log('luminance from linear = '+Y);       
      return Y;
    }

Now the perceived contrast between two Y (or L) values:

(Llighter – Ldarker) / (Llighter + 0.1)

JS

    function perceivedContrast(Y1,Y2){
      let C = ((Math.max(Y1,Y2)-Math.min(Y1,Y2))/(Math.max(Y1,Y2)+0.1));
      console.log('perceived contrast from '+Y1+','+Y2+' = '+C); 
      return C;      
    }

Now all of the above functions combined into a single input/output step:

    function perceivedContrastFromHex(hex1,hex2){
      let lin1 = linearFromRGB(HexToRGB(hex1));
      let lin2 = linearFromRGB(HexToRGB(hex2));
      let y1 = luminanceFromLin(lin1);
      let y2 = luminanceFromLin(lin2);
      return perceivedContrast(y1,y2);
    }

and finally the test

    var P = perceivedContrastFromHex('#318261','#9d5fb0');
    // compares the purple and green example
    alert(P);
    // shows 0.034369592139888626
    var P = perceivedContrastFromHex('#000','#fff'); 
    // compares pure black and white
    alert(P);
    // shows 0.9090909090909091
2x2p
  • Hi @2x2p I think you've got it! Question: are you going to be checking contrasts that are very close, as in this example? If so, then you might want to use the more accurate sRGB linearization; there's a code snippet in my follow-up answer. The reason is that checking close contrasts with dark colors may need a bit more precision, depending on your needs. You can see how the darker values are off by more in the spreadsheet screenshot in my follow-up. I'm using the "accurate" sRGB linearization in the spreadsheet. – Myndex May 22 '19 at 05:57
  • I will implement the more accurate sRGB linearisation later. For the first version this was already very helpful. There remains one question, what is the % difference that tells us two colors are different enough? – 2x2p May 22 '19 at 12:42
  • Ah HAH! That is the million dollar question isn't it? I wish I could give you an "absolute" answer but — THERE ISN'T ONE. In fact this is the primary focus of my current research (which is partly why I've been writing these lengthy answers, LOL). For the ModWeber equation I gave you, I think 60% is a reasonable minimum for normally sighted individuals. 70% is a bit more "standard" and weighted toward those with impairments. The *fact* is that threshold contrast for normal vision is a mere 1% to 2%. But that's not very readable. When using a "ratio", 3:1 is pretty typical. Links in next post. – Myndex May 22 '19 at 22:04
  • Here are links to some of my Perception Research. On the [Myndex.com perception page](https://www.myndex.com/WEB/Perception), look at experiment CE-09 for some comparisons of different math. You might also want to read my lengthy [GitHub issue regarding WCAG contrast standards](https://github.com/w3c/wcag/issues/695). [NASA has a very good set of pages on the subject of contrast and displays.](https://colorusage.arc.nasa.gov/luminance_cont.php) And finally, have you read [the gamma FAQ and color FAQ?](https://poynton.ca/GammaFAQ.html) Dr. Charles Poynton explains things better than I do. – Myndex May 23 '19 at 01:18

Just for better syntax and ease of use, I have put the whole theory into one single parser within an object, which runs as follows.

The parser will compute these values in one step from color 318261:

A returned object will look like:

hex: "#318261"
rgb: {
  r: 49,
  g: 130,
  b: 97
}
int: 10313648
dec: {
  r: 0.19215686274509805,
  g: 0.5098039215686274,
  b: 0.3803921568627451
}
lin: {
  r: 0.030713443732993635,
  g: 0.2232279573168085,
  b: 0.11953842798834562
}
y: 0.17481298771137443
lstar: 48.86083783595441

JavaScript can call the object's internal parser with a hex color string as a parameter. The hex string can look like 000, #000, 000000, or #000000. There are two ways to process the result.

A: take the returned object as a whole into a variable:

var result = Color_Parser.parseHex('318261');
var lstar = result.lstar;

B: parse once and thereafter access parts of the last parser result. For example, picking only the L* value would be just:

Color_Parser.parseHex('#ABC');
var lstar = Color_Parser.result.lstar;

Here is the full code:

const Color_Parser = {
  version: '1.0.0.beta',
  name: 'Color_Parser',
  result: null, // the parser output
  loging: true, // set to false to disable writing each step to console log
  parseHex: function(_input) {
    if (this.loging) {
      console.log(this.name + ', input: ' + _input);
    }
    this.result = {};
    // pre flight checks
    if (!_input) {
      this.result.error = true;
      console.log(this.name + ', error');
      return this.result;
    }
    // first convert shorthand Hex strings to full strings
    this.result.hex = String(_input);
    if (this.result.hex.length == 3) {
      this.result.hex = '#' + this.result.hex.substr(0, 1) + this.result.hex.substr(0, 1) + this.result.hex.substr(1, 1) + this.result.hex.substr(1, 1) + this.result.hex.substr(2, 1) + this.result.hex.substr(2, 1);
    }
    if (this.result.hex.length == 4) {
      this.result.hex = '#' + this.result.hex.substr(1, 1) + this.result.hex.substr(1, 1) + this.result.hex.substr(2, 1) + this.result.hex.substr(2, 1) + this.result.hex.substr(3, 1) + this.result.hex.substr(3, 1);
    }
    if (this.result.hex.length == 6) {
      this.result.hex = '#' + this.result.hex;
    }
    if (this.loging) {
      console.log(this.name + ', added to result: ' + this.result.hex);
    }
    // second get int values from the string segments as channels
    this.result.rgb = {
      r: null,
      g: null,
      b: null
    };
    this.result.rgb.r = parseInt(this.result.hex.substr(1, 2), 16);
    this.result.rgb.g = parseInt(this.result.hex.substr(3, 2), 16);
    this.result.rgb.b = parseInt(this.result.hex.substr(5, 2), 16);
    if (this.loging) {
      console.log(this.name + ', added to result: ' + this.result.rgb);
    }
    // third get the combined color int value
    this.result.int = ((this.result.rgb.r & 0x0ff) << 16) | ((this.result.rgb.g & 0x0ff) << 8) | (this.result.rgb.b & 0x0ff);
    if (this.loging) {
      console.log(this.name + ', added to result: ' + this.result.int);
    }
    // fourth turn 8 bit channels to decimal
    this.result.dec = {
      r: null,
      g: null,
      b: null
    };
    this.result.dec.r = this.result.rgb.r / 255.0; // red channel to decimal
    this.result.dec.g = this.result.rgb.g / 255.0; // green channel to decimal
    this.result.dec.b = this.result.rgb.b / 255.0; // blue channel to decimal
    if (this.loging) {
      console.log(this.name + ', added to result: ' + this.result.dec);
    }
    // fifth linearize each channel
    this.result.lin = {
      r: null,
      g: null,
      b: null
    };
    for (var i = 0, len = 3; i < len; i++) {
      if (this.result.dec[['r', 'g', 'b'][i]] <= 0.04045) {
        this.result.lin[['r', 'g', 'b'][i]] = this.result.dec[['r', 'g', 'b'][i]] / 12.92;
      } else {
        this.result.lin[['r', 'g', 'b'][i]] = Math.pow(((this.result.dec[['r', 'g', 'b'][i]] + 0.055) / 1.055), 2.4);
      }
    }
    if (this.loging) {
      console.log(this.name + ', added to result: ' + this.result.lin);
    }
    // get Y from linear result
    this.result.y = (0.2126 * (this.result.lin.r)); // red channel
    this.result.y += (0.7152 * (this.result.lin.g)); // green channel
    this.result.y += (0.0722 * (this.result.lin.b)); // blue channel
    if (this.loging) {
      console.log(this.name + ', added to result: ' + this.result.y);
    }
    // get L* (perceptual lightness) from Y
    if (this.result.y <= (216 / 24389)) {
      this.result.lstar = this.result.y * (24389 / 27);
    } else {
      this.result.lstar = Math.pow(this.result.y, (1 / 3)) * 116 - 16;
    }
    if (this.loging) {
      console.log(this.name + ', added to result: ' + this.result.lstar);
    }
    // computing grayscale: to be continued hereafter
    // compute inverted rgb color
    this.result.invert = {
      r: null,
      g: null,
      b: null,
      hex: null
    };
    this.result.invert.r = (255 - this.result.rgb.r);
    this.result.invert.g = (255 - this.result.rgb.g);
    this.result.invert.b = (255 - this.result.rgb.b);
    // reverse compute hex from inverted rgb          
    this.result.invert.hex = this.result.invert.b.toString(16); // begin with blue channel
    if (this.result.invert.hex.length < 2) {
      this.result.invert.hex = '0' + this.result.invert.hex;
    }
    this.result.invert.hex = this.result.invert.g.toString(16) + this.result.invert.hex;
    if (this.result.invert.hex.length < 4) {
      this.result.invert.hex = '0' + this.result.invert.hex;
    }
    this.result.invert.hex = this.result.invert.r.toString(16) + this.result.invert.hex;
    if (this.result.invert.hex.length < 6) {
      this.result.invert.hex = '0' + this.result.invert.hex;
    }
    this.result.invert.hex = '#' + this.result.invert.hex;
    this.result.error = false;
    if (this.loging) {
      console.log(this.name + ', final output:');
    }
    if (this.loging) {
      console.log(this.result);
    }
    return this.result;
  }
}
2x2p