
Given the CSS processing specifics, most notably RTL matching and selector efficiency, how should selectors be written purely from the perspective of rendering engine performance?

This should cover general aspects as well as when to use or avoid pseudo-classes, pseudo-elements and relationship selectors.

  • Good question, but always remember: performance is not everything until it is the only thing left to worry about. Only start worrying about performance if you're sure you absolutely need to save as much as possible and your design allows for it. – BoltClock Jun 27 '12 at 03:32
  • The "CSS Selector Profiler" in Chrome's Developer Tools is useful if you want to... profile your CSS selectors. – thirtydot Jun 27 '12 at 03:47

1 Answer


At runtime an HTML document is parsed into a DOM tree containing N elements with an average depth D. There is also a total of S CSS rules in the stylesheets applied.

  1. Elements' styles are applied individually, meaning there is a direct relationship between N and overall complexity. Worth noting, this can be somewhat offset by browser logic such as reference caching and recycling styles from identical elements. For instance, the following list items will have the same CSS properties applied (assuming no pseudo-classes such as :nth-child are applied):

    <ul class="sample">
      <li>one</li>
      <li>two</li>
      <li>three</li>
    </ul>
    
  2. Selectors are matched right-to-left for individual rule eligibility - i.e. if the right-most key does not match a particular element, there is no need to process the selector further and it is discarded. This means the right-most key should match as few elements as possible. Below, the p key in the first rule will match more elements, including paragraphs outside of the target container (which, of course, will not have the rule applied but will still result in more iterations of eligibility checking for that particular selector):

    /* right-most key `p`: every paragraph on the page is checked against this rule */
    .custom-container p {}
    /* right-most key `.custom-paragraph`: only elements carrying that class are checked */
    .container .custom-paragraph {}
    
  3. Relationship selectors: a descendant selector requires up to D elements to be iterated over. For instance, successfully matching .container .content may only require one step should the elements be in a parent-child relationship, but the DOM tree will need to be traversed all the way up to html before an element can be confirmed a mismatch and the rule safely discarded. This applies to chained descendant selectors as well, with some allowances (see the sketch after this list).

     On the other hand, a > child selector, an + adjacent selector or :first-child still require an additional element to be evaluated, but they have an implied depth of one and will never require further tree traversal.

  4. The way pseudo-elements such as :before and :after are defined implies they are not part of the RTL matching paradigm. The logic behind this assumption is that there is no pseudo-element per se until a rule instructs one to be inserted before or after an element's content (which in turn requires extra DOM manipulation, but no additional computation is needed to match the selector itself); a minimal example follows this list.

  5. I couldn't find any information on pseudo-classes such as :nth-child() or :disabled. Verifying an element's state would require additional computation, but from the rule-parsing perspective it would only make sense for them to be excluded from RTL processing.
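To make point 3 concrete, here is a minimal sketch (the markup and class names are hypothetical) contrasting the traversal cost of a descendant selector with its child-selector equivalent:

    <div class="container">
      <div class="wrapper">
        <p class="content">text</p>
      </div>
    </div>

    /* descendant: matching the <p> above means walking up through .wrapper
       and then .container; a non-matching element is walked all the way up
       to html before the rule can be discarded */
    .container .content {}

    /* child: exactly one parent (.wrapper) is inspected, never more */
    .wrapper > .content {}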
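And for point 4, a minimal example (the .note class is hypothetical): the matching cost of this rule lies entirely in the .note key; the :before box is only generated once the element has matched, so the pseudo-element itself adds no matching work:

    .note:before { content: "note: "; }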

Given these relationships, computational complexity O(N*D*S) should be lowered primarily by minimizing the depth of CSS selectors and addressing point 2 above. This results in quantifiably stronger improvements than minimizing the number of CSS rules or HTML elements alone^
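As a rough illustration with assumed numbers: N = 1,000 elements, D = 10 and S = 500 rules put the worst-case ceiling around N*D*S = 5,000,000 elementary checks. Keeping rules to shallow selectors removes most of the D factor, pulling that ceiling toward N*S = 500,000, whereas halving S alone only brings it to 2,500,000.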

Shallow, preferably single-level, specific selectors are processed faster. Google takes this to a whole new level (programmatically, not by hand!): there is rarely a three-key selector, and most of the rules in its search results look like

#gb {}
#gbz, #gbg {}
#gbz {}
#gbg {}
#gbs {}
.gbto #gbs {}
#gbx3, #gbx4 {}
#gbx3 {}
#gbx4 {}
/*...*/
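As an illustrative sketch of the same flattening idea (the class names here are hypothetical, not Google's), a deep descendant selector can often be replaced with a single dedicated class:

/* before: the key `a` is checked against every link on the page,
   with ancestor traversal on each candidate */
.nav ul li a {}

/* after: a single class check per candidate, no tree traversal */
.nav-link {}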

^ - while this is true from a rendering-engine performance standpoint, there are always additional factors such as traffic overhead, DOM parsing, etc.


  • In Selectors, all pseudo-elements (not just `::before` and `::after`) are subject to the same rule that they may only be applied to the subject of a selector and are only evaluated after selector matching is completed (http://www.w3.org/TR/selectors/#pseudo-elements). From CSS1 to CSS3, this is always the key selector; however, in CSS4 this may change. – BoltClock Jun 27 '12 at 03:48
  • RTL parsing is an implementation detail and it may vary from engine to engine, but the general concept of starting at the key selector and working backwards is agreed upon among vendors. Which simple selectors in each compound selector are evaluated first seems to be something only the source code can answer... more on that [here](http://stackoverflow.com/questions/10106345/css-selector-engine-clarification/10108700#10108700). – BoltClock Jun 27 '12 at 03:50
  • @BoltClock: good point; I'd be willing to speculate that the CSS4 parent selector will effectively be a pseudo-class with a *has-children* condition - otherwise there could be a lot of redundant matching cycles. As for engine-specific implementations, they are probably better off sticking to the suggested behavior - a notable example of messing it up is IE7's caching of `:first-child` references – Oleg Jun 27 '12 at 04:51