
I'm trying to understand HyperHTML and how to get the best performance out of it.

Reading about how it works under the hood, it seems that strong connections are made between a template and the DOM, which I take to mean it requires a different mindset from a virtual DOM library to optimise performance.

I wrote some code to show sorting of N elements in a table using hyperHtml vs normalHtml. The normalHtml version just flushes the table and rebuilds the elements. They both seem similar performance-wise. Am I comparing apples with oranges here? How can I make the hyperHtml version of the code perform better?

Code:

const numberOfElements = 10000
const items = Array.apply(null, Array(numberOfElements)).map((el, i) => i).sort(() => .5 - Math.random())
const sortMethods = [
  () => 0,
  (a, b) => a - b,
  (a, b) => b - a
]

function hyperHtmlTest() {

  const $node = document.createElement('div')
  const $table = document.createElement('table')
  const $button = document.createElement('button')
  const tableRender = hyperHTML.bind($table)

  let sortMethodIndex = 0

  function render () {

    const sortMethod = sortMethods[sortMethodIndex++ % sortMethods.length]

    tableRender`${
      items.concat().sort(sortMethod).map(item => {
        return `<tr><td>${item}</td></tr>`
      })
    }`
  }

  $node.appendChild($button)
  $node.appendChild($table)

  $button.textContent = 'HyperHTML Sort'
  $button.onclick = render

  return $node

}

function normalHtmlTest() {

  const $node = document.createElement('div')
  const $table = document.createElement('table')
  const $button = document.createElement('button')

  let sortMethodIndex = 0

  function render () {

    const sortMethod = sortMethods[sortMethodIndex++ % sortMethods.length]

    while($table.childNodes.length)
      $table.removeChild($table.childNodes[0])

    const frag = document.createDocumentFragment()

    items.concat().sort(sortMethod).forEach(item => {

      const tr = document.createElement('tr')
      const td = document.createElement('td')

      td.textContent = item
      tr.appendChild(td)

      frag.appendChild(tr)

    })

    $table.appendChild(frag)

  }

  $node.appendChild($button)
  $node.appendChild($table)

  $button.textContent = 'NormalHtml Sort'
  $button.onclick = render  

  return $node

}

document.body.appendChild(hyperHtmlTest())
document.body.appendChild(normalHtmlTest())

Or on CodePen

To summarise the question: Why is HyperHTML as performant as plain ol' DOM manipulation in my code example, and how could I make HyperHTML more performant specifically when re-ordering DOM Nodes?

shennan
  • I haven’t used `hyperHTML`, but are you supposed to be building it up using DOM nodes? I don’t see any references to `createElement` in [the documentation](https://viperhtml.js.org/hyperhtml/documentation/). It looks like [you should be using `hyper`](https://viperhtml.js.org/hyperhtml/documentation/#essentials-11). – Aankhen Sep 06 '18 at 11:32
  • @Aankhen I'm not for the `hyperHtmlTest()`. The `createElement` references there are just initial setup. All renders thereafter are using `hyperHTML` with template literals. – shennan Sep 06 '18 at 11:35
  • Oh, sorry. Shows what I know! *butts out* – Aankhen Sep 06 '18 at 11:36
  • @Aankhen No worries :-) – shennan Sep 06 '18 at 11:54

1 Answer


Update

hyperHTML 2.14 introduced domdiff V2, which brings in petit-dom-like performance at the cost of an extra 0.6K added to the library; hopefully worth the change.

The original demo also had huge problems on Safari for some weird reason, most likely because rows were appended directly as TR elements to the TABLE, instead of into the TBODY element.

Anyway, you can now compare performance of the wired demo through this CodePen.

Please note that everything said below about the old snabbdom-like diffing is not relevant anymore.


It seems you could have read a bit further in the documentation, reaching the wire part, since you already got the bind one.

Basically, if you use an Array of strings as the interpolated value, you are doing nothing different from an innerHTML-like operation, with all its usual side effects:

  • every update trashes all nodes and creates them from scratch
  • every reference to any node is lost forever
  • the amount of GC work is higher
  • the layout is XSS-prone, hence not safe

To avoid replicating all these innerHTML gotchas and use hyperHTML properly, you need to wire items to the nodes they represent.

tableRender`${
  items.concat().sort(sortMethod).map(item => {
    return wire(item)`<tr><td>${item.text}</td></tr>`
  })
}`

However, since memory is a concern, wire works through a WeakMap, so numbers and other primitives aren't great references, unless used as the `:id` of the wire.
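That identity constraint can be sketched without hyperHTML at all; the toy cache below (the `wireLike` helper is hypothetical, not the library's real internals) shows why a WeakMap forces object references:

```javascript
// Toy sketch of a WeakMap-backed node cache (hypothetical wireLike helper,
// not hyperHTML's actual internals): one cached entry per object identity.
const cache = new WeakMap();

function wireLike(ref) {
  if (!cache.has(ref)) cache.set(ref, { node: '<tr>…</tr>' });
  return cache.get(ref);
}

const a = new Number(1);
const b = new Number(1); // same value, different identity

console.log(wireLike(a) === wireLike(a)); // true: same object, cached node reused
console.log(wireLike(a) === wireLike(b)); // false: different objects, different nodes

// Primitives can't be WeakMap keys at all:
try {
  cache.set(1, {});
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```

This is also why switching the demo items from plain numbers to object wrappers matters: without a stable object identity there is nothing for the cache to key on.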

The reality is that nobody has bare numbers or strings as items in the real world; 99% of the time they are represented by objects, so let each item be an object for the demo's sake.

Array.apply(null, Array(numberOfElements)).map(
  (el, i) => new Number(i)
)
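For completeness, those `Number` wrappers behave as follows: each call produces a distinct object (usable as a WeakMap key), while the unary plus used later in the render unboxes the primitive value again:

```javascript
// Number wrappers: value-equal but identity-distinct objects.
const items = Array.apply(null, Array(3)).map((el, i) => new Number(i));

console.log(typeof items[1]);            // "object", not "number"
console.log(+items[1]);                  // 1: unary plus unboxes the primitive
console.log(items[1] == 1);              // true: loose equality compares values
console.log(items[1] === new Number(1)); // false: strict equality compares identity
```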

Once you have objects instead of primitives (regardless of the kind of object I've created for demo's sake), you have a more realistic scenario: every time you invoke the render, or update, the rows won't be trashed and re-created; they will simply be re-ordered, as you can see in my CodePen fork of yours, basically summarized as such:

function hyperHtmlTest() {
  const {bind, wire} = hyperHTML;
  const render = bind(document.createElement('div'));
  let sortMethodIndex = 0;
  return update();
  function update() {
    const sortMethod = sortMethods[sortMethodIndex++ % sortMethods.length];
    return render`
    <button onclick=${update}>HyperHTML Sort</button>
    <table>
      ${items.concat().sort(sortMethod).map(
        item => wire(item)`<tr><td>${+item}</td></tr>`
      )}
    </table>`;
  }
}

About Performance

Behind hyperHTML there is an engine called domdiff, whose purpose is to re-organize nodes.

The algorithm used in domdiff is, on average, pretty damn fast, but there are cases where it can be slower than the browser creating the same layout all at once.

You can easily spot in my pen that switching from ASC to DESC, or vice versa, is 5X faster than its vanilla DOM counterpart; but going from an ordered list to a fully random one makes domdiff perform a lot of checks the DOM counterpart wouldn't care about at all, so there it can be slower.

In a few words, while the vanilla DOM approach is linearly fast (or slow), the algorithmic one has best cases and worst cases.

An algorithm that performs well in almost every case is the one used by petit-dom, but all that extra logic carries a weight that is, IMO, unnecessary for real-world scenarios, however impressive in non-real-world benchmarks.

70000 Rows without a sweat

It's no secret that these days I'm working on a hyperHTML Custom Element whose aim is to handle hundreds of thousands of rows, through a table that is both sortable and searchable.

You can see an early prototype on this page, and realize that it doesn't matter how a framework performs with 10,000 table rows, because a good component should never put that many nodes on the DOM: no user would ever be able to see them all at once.
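The "render only what the user can see" idea boils down to simple arithmetic; here is a minimal sketch assuming fixed-height rows (the function name and `overscan` default are illustrative, not taken from HyperHTMLElement):

```javascript
// Compute which slice of a huge list is visible, assuming fixed row height.
// Only rows[start..end) ever reach the DOM, regardless of total size.
function visibleSlice(scrollTop, viewportHeight, rowHeight, total, overscan = 5) {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const end = Math.min(total, Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan);
  return { start, end };
}

console.log(visibleSlice(0, 600, 20, 70000));     // { start: 0, end: 35 }
console.log(visibleSlice(10000, 600, 20, 70000)); // { start: 495, end: 535 }
```

With 70K rows this keeps the rendered subtree at a few dozen nodes, which is why sorting and scrolling can stay fast no matter the dataset size.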

As you can see in that 70K table, sorting is a flash, and so are searching and scrolling; that is HyperHTMLElement in all its glory.

These benchmarks? Not so interesting for daily, real-world tasks.

I hope this answered all your doubts.

Andrea Giammarchi
  • Thanks for your response, and for the work on this project. I did read the parts of the docs that you mentioned, though I didn't understand the explanations of `bind` and `wire`. For me, the docs moved on from that subject far too quickly - but perhaps I'm just slow. Thank you for the Pen. I [added some perf times to the console output](https://codepen.io/anon/pen/xaPbrm?editors=0010). As you've touched on, the initial creation of els for Hyper is slower than the standard DOM, but it evens out after that. I still don't see a significant improvement over DOM though. Am I missing something? – shennan Sep 07 '18 at 09:26
  • You have it faster when it goes from ordered ASC to ordered DESC (and vice-versa), but you won't have it faster when you go from ordered to random and from random to ordered. This is **expected** because of what I've explained already: the DOM is linearly fast (or slow) in your example, but you lose nodes every single time, so you cause a lot of GC work. If that exact table is what you need in real life, there's no library that would win there; just go vanilla DOM. – Andrea Giammarchi Sep 07 '18 at 11:18
  • P.S. you could go even faster just using `innerHTML` there ... if that's what you are after, consider that too. – Andrea Giammarchi Sep 07 '18 at 11:21
  • I hear you. And thanks again. Perhaps, as you have said, a better approach would be to only show what is being shown to the user. I was just trying to avoid using scroll events. – shennan Sep 07 '18 at 11:25
  • If you have meaningful sorts, like every table usually does, you'll have better performance. It's the random factor that messes up results here, but that's usually not a real-world scenario. Plus, users scrolling 10000 rows, or you showing 10000: is it realistic? 'Cause it's a lot of data to either load, or show, all at once, when they'll inevitably see only a portion of it, even with 5K monitors. TL;DR I wouldn't bother with these tests, but if you think hyperHTML doesn't perform well, feel free to try other libraries; at the end it's you that should be happy, not me :-) – Andrea Giammarchi Sep 07 '18 at 11:28
  • The randomising only happens on initiation. After that every click of the button is a genuine sort. The first sort is simply `() => 0` making all equal and corresponding to the original array. As far as the render method is concerned, it's just another sort. But perhaps your comment still stands based on Hyper's search algorithm? – shennan Sep 07 '18 at 11:35
  • domdiff has no memory of that initial state, so every time you reach `=> 0` it won't be quite a genuine sort: it's back to ordered => random, and the next click random to ordered. That's unpredictable speed in pretty much any algorithm out there; a linear solution is the only predictable one in terms of performance, always O(n). – Andrea Giammarchi Sep 07 '18 at 11:43
  • And there is no way to trick domdiff into remembering? Remember that the 'random' sort has the same output every time... The order is created at page load. – shennan Sep 07 '18 at 11:46
  • That would mean keeping 10000 objects in memory, and it'd still need to loop and compare all of them to understand it's the same list anyway, so the short answer is no. – Andrea Giammarchi Sep 07 '18 at 11:48
  • Please read more about the complexity of permutation in this paper http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.4.6927&rep=rep1&type=pdf which is the one petit-dom used for its algorithm. In this case the best / easiest result for random to ordered or ordered to random will be close to O(n) anyway, but it's unpredictable regardless. – Andrea Giammarchi Sep 07 '18 at 11:49