In 2011, Mike Bostock, one of the creators of D3.js, remarked on XSLT:

“XSLT’s approach is elegant, but only for simple transformations: without high-level visual abstractions, nor the flexibility of imperative programming, XSLT is cumbersome for any math-heavy visualization task (e.g., interpolation, geographic projection or statistical methods).”
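To make that concrete, here is the kind of ceremony I have in mind, at least in XSLT 1.0: a hypothetical sketch (the `<value>` elements are made up) of a sum of squares, a building block for variance, which needs a recursive named template because the language has no mutable accumulators and, before 3.0, no higher-order functions:

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Hypothetical sketch: sum of squares over <value> elements in
       XSLT 1.0. With no mutable variables, the idiomatic fold is a
       recursive named template threading an accumulator parameter. -->
  <xsl:template name="sum-of-squares">
    <xsl:param name="nodes"/>
    <xsl:param name="acc" select="0"/>
    <xsl:choose>
      <xsl:when test="$nodes">
        <xsl:call-template name="sum-of-squares">
          <xsl:with-param name="nodes" select="$nodes[position() &gt; 1]"/>
          <xsl:with-param name="acc" select="$acc + $nodes[1] * $nodes[1]"/>
        </xsl:call-template>
      </xsl:when>
      <xsl:otherwise>
        <xsl:value-of select="$acc"/>
      </xsl:otherwise>
    </xsl:choose>
  </xsl:template>

  <!-- Kick off the fold over a made-up <data><value>…</value></data> input. -->
  <xsl:template match="/data">
    <xsl:call-template name="sum-of-squares">
      <xsl:with-param name="nodes" select="value"/>
    </xsl:call-template>
  </xsl:template>
</xsl:stylesheet>
```

I'm aware that XPath 2.0 can collapse this into a single expression, `sum(for $v in value return $v * $v)`, but the high-level visual abstractions Mike mentions still seem absent.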
At the time of writing this question, XSLT has gone through two major revisions (2.0 and 3.0), yet it is still not widely associated with the term 'data visualization'.
Why? The top few concise, concrete reasons, please, perhaps supported with a line or two of code. I want to better understand the shortfalls central to its lack of appeal; two or three should suffice.
If Mike's comments are still valid, please say so. Again, an example or two would be helpful.
Should you prefer a little more context for your answers:
Thanks to tight, local bindings between data and presentation, D3.js has raised the expectation that quality data-driven applications will respond dynamically and immediately to changes in a data set. You may like to consider XSLT ecosystem characteristics that directly impede this kind of behavior.
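For example, my mental model of the XSLT workflow is a single batch pass (a hypothetical sketch; the `<data>`/`<point>` source format is made up): the whole input becomes a whole SVG output, with no incremental data join, so any change to the data appears to mean re-running the entire transform and replacing the result wholesale:

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- One pass: the whole <data> document becomes a whole SVG document.
       A single changed <point> still means rerunning the entire
       transform and swapping the output in. -->
  <xsl:template match="/data">
    <svg xmlns="http://www.w3.org/2000/svg" width="300" height="120">
      <xsl:for-each select="point">
        <circle cx="{position() * 20}" cy="{120 - @y}" r="3"/>
      </xsl:for-each>
    </svg>
  </xsl:template>
</xsl:stylesheet>
```

Contrast that with D3's enter/update/exit selections, which patch only the affected DOM nodes in place.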
Is XSLT-controlled synchronization of transformations across multiple SVG models feasible, as for example in a pipeline driven by MusicXML and other models (music notation -> dynamic score playback -> interactive instruments or theory tools)? If not, in a laser-sharp sentence or two, why not?
BTW, I found some useful hints among answers to an earlier (but somewhat woolly) question, but they don't get into specifics.