
Could someone prove to me that the advice given here (copied below), regarding removing DOM elements before altering them and then re-inserting them, is ever actually quicker?

By prove, I mean I would like to see some figures. It's great that they researched this, but I think the article is very weak without specifics as to what the 'problem' actually is and how the solution fixes it in terms of speed (given that the article is titled "Speeding up JavaScript").

The article:

Out-of-the-flow DOM Manipulation

This pattern lets us create multiple elements and insert them into the DOM while triggering only a single reflow. It uses something called a DocumentFragment. We create a DocumentFragment outside of the DOM (so it is out-of-the-flow), then create and add multiple elements to it. Finally, we move all of the elements in the DocumentFragment to the DOM, triggering only a single reflow.

The problem

Let's make a function that changes the className attribute for all anchors within an element. We could do this by simply iterating through each anchor and updating its className attribute. The problem is, this can cause a reflow for each anchor.

function updateAllAnchors(element, anchorClass) {
  var anchors = element.getElementsByTagName('a');
  for (var i = 0, length = anchors.length; i < length; i++) {
    anchors[i].className = anchorClass;
  }
}

The solution

To solve this problem, we can remove the element from the DOM, update all anchors, and then insert the element back where it was. To help achieve this, we can write a reusable function that not only removes an element from the DOM, but also returns a function that will insert the element back into its original position.

/**
 * Remove an element and provide a function that inserts it into its original position
 * @param element {Element} The element to be temporarily removed
 * @return {Function} A function that inserts the element into its original position
 **/
function removeToInsertLater(element) {
  var parentNode = element.parentNode;
  var nextSibling = element.nextSibling;
  parentNode.removeChild(element);
  return function() {
    if (nextSibling) {
      parentNode.insertBefore(element, nextSibling);
    } else {
      parentNode.appendChild(element);
    }
  };
}

Now we can use this function to update the anchors within an element that is out of the flow, triggering a reflow only when we remove the element and again when we re-insert it.

function updateAllAnchors(element, anchorClass) {
  var insertFunction = removeToInsertLater(element);
  var anchors = element.getElementsByTagName('a');
  for (var i = 0, length = anchors.length; i < length; i++) {
    anchors[i].className = anchorClass;
  }
  insertFunction();
}
redsquare

5 Answers


You will find it hard to get meaningful figures for this from JavaScript profiling, as what you are really saving is repaints and reflows, which won't show up in most profiling tools. You can use the Firebug Paint Events extension to see visually how many repaints you're saving.
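
If you do want rough numbers from script, one trick is to read a layout property immediately after the update; that forces the browser to perform any pending reflow synchronously, so the reflow cost shows up in the timing. A minimal sketch, assuming the question's two versions have been renamed updateAllAnchorsInPlace and updateAllAnchorsDetached and the page has a container with id content (all hypothetical names); note this still won't capture repaint time:

function timeUpdate(updateFn, element, anchorClass) {
  var start = new Date().getTime(); // performance.now() in newer browsers
  updateFn(element, anchorClass);
  // Reading offsetHeight forces any pending reflow to happen now,
  // so its cost is included in the measurement.
  var forcedLayout = element.offsetHeight;
  return new Date().getTime() - start;
}

var content = document.getElementById('content');
alert('in place: ' + timeUpdate(updateAllAnchorsInPlace, content, 'fancy') + ' ms');
alert('detached: ' + timeUpdate(updateAllAnchorsDetached, content, 'fancy') + ' ms');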

Alex
  • Nice extension, however both examples seem to do the same amount of redraws for me. Any chance you can give a demo at pastebin.me? :) – redsquare Dec 04 '09 at 02:39
  • I will admit that the extension slows things down a lot so it can sometimes be hard to see the difference on computationally heavy operations e.g. Google Maps zooming. – Alex Dec 04 '09 at 02:53

I put some links on a page and tested the method in the article compared to setting the class name with the elements still in the page. I tried this in Firefox 3, IE 8 and Chrome 3.

I made classes for the links that had different color and different font size. As the link text had different size for the different classes, I was sure that the page really had to be reflowed.

For any reasonable number of links (up to a few thousand), removing and adding the elements is slightly slower.

For an extremely large number of links (10 000), removing and adding the elements is slightly faster.

However, the difference is quite small. You have to have several thousand links to be able to notice any difference at all, and at 10 000 links there is still only something like a 20% difference.

So, what I have found is that you can't expect any dramatic change from this method. If you have performance problems, there are probably other methods that will give a much better result. I would, for example, try changing the class name of the parent element instead of all the child elements, and let CSS do the work, as sketched below. Tests I have done before showed that this can be about ten times faster.
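
As a sketch of that idea (the class name and CSS rule here are made up), the whole update collapses into a single mutation:

/* Assumed to exist in the stylesheet:
   .fancy a { color: #c00; font-size: 120%; }
*/
function updateAllAnchorsViaCss(element) {
  // One class change on the parent; the browser restyles every anchor
  // through the descendant selector, typically with a single reflow.
  element.className += ' fancy';
}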

Guffa

This is more or less the same as using DocumentFragments to initialize elements rather than the live DOM. DocumentFragments end up being faster because they have far less structure and no actual rendering to worry about.

Here are some of John Resig's notes on the performance benefits of DocumentFragments (which jQuery currently uses internally):

http://ejohn.org/blog/dom-documentfragments/
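
For comparison, the fragment version of the pattern looks something like this (a sketch; the list element id and the labels are made up):

// Build the new nodes off-DOM in a DocumentFragment, then attach them
// all at once so the document only reflows on the final append.
function appendItems(list, labels) {
  var fragment = document.createDocumentFragment();
  for (var i = 0; i < labels.length; i++) {
    var item = document.createElement('li');
    item.appendChild(document.createTextNode(labels[i]));
    fragment.appendChild(item);
  }
  list.appendChild(fragment); // single insertion, single reflow
}

appendItems(document.getElementById('list'), ['one', 'two', 'three']);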

Alex Sexton

The short answer is that changes to the DOM of the actual page trigger JavaScript events, CSS re-evaluation, propagation of the changes' effects to the rest of the DOM around them, and so on. Disconnected nodes in flux have none of that attached to them, so manipulating them is much cheaper.

An analogy: If you were an animator working on Toy Story 4, and in a near-final render you saw a change you needed to make in the fabric physics of a scene, would you make your changes while doing full-detail re-renders to inspect the scene, or turn off textures and colors and lower the resolution while you make those changes?

ironfroggy
  • I want figures. Where are the benefits? Theory is great; just like Iraq had WMD, look what happened. I still don't see how it makes the JS quicker. I have profiled it and can see no benefit. Just wondered what I was missing. – redsquare Dec 04 '09 at 01:41
  • @redsquare: The title of that article is misleading, and in some ways inaccurate. It doesn't necessarily make the JS quicker... it makes it quicker for the browser to render the DOM. In other words, the JS could run just as fast (or even slower), but the page would appear faster, because the browser has to spend less time reflowing and repainting. Reflowing and repainting is the way the browser renders HTML and CSS. JS performance is not the same as browser rendering performance. As an example, tables are expensive to render, even when your site has no JS. – Pauan Dec 31 '10 at 01:19
  • @redsquare: I think the reason they called it "speeding up JavaScript" is because JS is *usually* the thing that triggers reflows and repaints. In other words, by changing your JS code, you can make the page appear faster. But it's not appearing faster because the JS is faster, it's appearing faster because the browser is reflowing and repainting less. For instance, a Java applet that manipulated the DOM would likely be just as bad, so it is inaccurate to call it "JavaScript speed". It's really rendering speed. – Pauan Dec 31 '10 at 01:21
  • @redsquare: P.S. I'm fairly sure that profiling the JS code will show little to no difference, because you're not supposed to measure the JS code. You're supposed to measure the time it takes the browser to render, and that is harder to profile. – Pauan Dec 31 '10 at 01:25

There is no reliable way to tell when a reflow has finished, other than looking at the page.

The time spent calculating is virtually the same whether you add or change elements all at once or one by one; the script doesn't wait for the browser between elements, the browser catches up afterwards.

Sometimes you want to see something right away, even if it takes longer to render the whole thing- other times you want to wait a little longer and show it all when it is ready.

If you can't tell which is better, get a designer to look at the one-by-one and all-at-once versions, but don't ask two designers!
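
If you want the show-something-right-away behaviour, the usual trick is to insert in small batches and yield with setTimeout so the browser can reflow and paint between chunks. A sketch (the chunk size is arbitrary):

function appendInChunks(container, items, chunkSize) {
  var index = 0;
  function nextChunk() {
    // Add up to chunkSize items, then give the browser a chance to paint.
    for (var n = 0; n < chunkSize && index < items.length; n++, index++) {
      var li = document.createElement('li');
      li.appendChild(document.createTextNode(items[index]));
      container.appendChild(li);
    }
    if (index < items.length) {
      setTimeout(nextChunk, 0);
    }
  }
  nextChunk();
}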

kennebec