Important:
You seem to advocate a stripped-down, statically typed, compilable version of JS. The first thing that shows is that you have no clue as to what JS is: a multi-paradigm programming language that supports prototype-based OO, imperative and functional programming styles. The key one being the functional style. Statically typing that kind of code is awkward at best: languages like Haskell manage it, but only with a far more elaborate type system than anything C-like. Imagine C-like function definitions that have to describe functions returning closures:
Object a = (function (Object g)
{
    char[] closureChar = g.location.href;
    Object foo = {};
    foo.bar = char* function ()
    {
        // This is a right mess
        return &closureChar;
    };
    return foo; // without this, a would just be undefined
}(this));
A function is a first-class object, too. Using tons of lambda functions that return objects, which reference functions that might return themselves, other functions, objects or primitives... how on earth are you going to write type signatures for all that? JS functions are as much a way of creating scopes, structuring your code and controlling the flow of your program as they are things you assign to variables.
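For contrast, here's a minimal sketch (the names are made up) of that kind of thing in plain, dynamic JS; notice that no part of it needs a type annotation:
// a function that returns an object whose methods close over local state
function makeCounter(start)
{
    var count = start;
    return {
        increment: function () { return ++count; }, // closes over count
        reset: function () { count = start; }       // so does this one
    };
}

var counter = makeCounter(10);
counter.increment(); // 11
counter.increment(); // 12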
The problem with compiling JS ahead of time is quite simple: you'd be compiling code that has to run on a vast array of different platforms: desktops and laptops running Windows, OS X, Linux and UNIX, as well as tablets and smartphones with their various mobile browsers...
Even if you did manage to write and compile JS that runs on all platforms, the speed of JS would still be limited by it being single-threaded and running on a JS engine (much like Java runs on a VM).
Compiling the code client-side is already being done. True, it takes some time, but not an awful lot. It is quite resource-intensive, though, so most modern browsers will cache the code in such a way that a lot of the preprocessing has already been done. The parts that can always be compiled will be cached in their compiled state, too. V8 is an open-source, fast JS engine; if you want, you can check its source to see how it determines which aspects of the JS code get compiled and which don't.
Even so, that's only how V8 works... The JS engine has more to do with how fast your code runs than the language does: some engines are very fast, others aren't. Some are faster at one thing, while others outperform the competition in another area. More details can be read here
Stripping the DOM part isn't stripping anything from the language. The DOM API isn't part of JS itself. JS is a very expressive but, at its core, small language, just like C. Left to their own devices, neither has any I/O capabilities, nor can they parse a DOM. For that, browser implementations of JS have access to a DOMParser object.
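A quick sketch of what that host-provided object looks like in a browser (just an illustration, not part of the language itself):
// DOMParser comes from the browser environment, not from JS
var parser = new DOMParser();
var doc = parser.parseFromString('<p>Hello</p>', 'text/html');
console.log(doc.body.firstChild.textContent); // "Hello"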
You suggest a minimal DOM... hey, everybody with any sense is all for a revamped DOM API. It's far from the best thing about the web. But you have to realize that the DOM and JS are separate entities. The DOM (and the DOM API) is managed by the W3C, whereas ECMA is responsible for JS; the one has nothing to do with the other. That's why the DOM can't be "stripped" from JS: it was never a part of it to begin with.
Since you compare JS to C++: you can write C++ code that compiles on both Windows and Linux machines, but that's not as easy as it sounds. Since you refer to C++ yourself, though, I think you know about that, too.
Speaking of which, if the only real difference you see between C++ and JS is the static vs dynamic typing, you really should spend a bit more time learning about JS.
While its syntax is C-like, the language itself bears a lot more resemblance to Lisp (i.e. functional programming). It doesn't have classes as such, but uses prototypes... the dynamic typing really isn't that big of a deal, to be honest.
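A minimal sketch of what that looks like (the object names are made up):
// no class declarations, just objects delegating to other objects
var animal = {
    describe: function () { return 'I am a ' + this.kind; }
};

var dog = Object.create(animal); // animal is now dog's prototype
dog.kind = 'dog';
dog.describe(); // "I am a dog" -- describe is found via the prototype chain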
So, bottom line:
Compiling JS to run on every machine would lead to something like MS's .NET framework. The philosophy behind that was "write once, run everywhere"... which didn't turn out to be true at all.
Java is X-platform, but that's only because it's not compiled to native code, but runs on a virtual machine.
Lastly, the ECMAScript standard (JS being its most common implementation) is not all that good, and is the result of the joint effort of all the big competitors in the field: Mozilla, Google, Microsoft and some irrelevant Swiss company. It's one huge compromise. Just imagine those three big names agreeing to make a compiler for JS together: Microsoft would just put forth its JScript compiler as the best, Google would have its own ideas, and Mozilla would probably have 3 different compilers ready, depending on what the community wants.
Edit:
You made an edit, clarifying that you're talking about client-side JS. Because you felt the need to specify that, I feel as though you're not entirely sure where JS ends and where the browser takes over.
JS was designed as a very portable language: it has no I/O capabilities of its own, supports multiple development paradigms, and (initially) was a fully interpreted language. True, it was developed with the web in mind, but you can, and some do, use the language to query a database (MongoDB), as an alternative batch-scripting language (JScript), or as a server-side scripting language (Node.js, ...). Some use ECMAScript (the standard JS is based on) to build their own programming language (yes, I'm talking about Flash's ActionScript).
Depending on the use case, JS will be given access to objects/APIs that aren't native to the language (document, [Object http].createServer and [Object file].readFileSync, for DOM access, webserver capabilities and I/O respectively). Those host-provided objects often form the bottlenecks, not the language itself.
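To make that concrete, here's a rough sketch of the server-side case, assuming Node.js and a hypothetical ./index.html sitting next to the script; http.createServer and fs.readFileSync are host-provided, not part of the language:
var http = require('http'); // host-provided webserver capabilities
var fs = require('fs');     // host-provided file I/O

http.createServer(function (req, res) {
    // the synchronous disk read is the slow part, not the JS around it
    var body = fs.readFileSync('./index.html');
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end(body);
}).listen(8080);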
As I hinted at, JS was initially an interpreted language. But as is the way these days, the dividing line between compiled and interpreted languages has been fading for the past decade, to be honest.
C and C++ used to be strictly compiled languages, but in some cases (.NET's C++/CLI) C++ code needn't be compiled to machine code anymore...
At the same time, scripting languages like Python are used for so many purposes that they're generally referred to as programming languages, since the term "scripting language" somehow implies a lesser language.
A few years ago, PHP 5 was released along with the ZendEngine2. Since then, PHP has been compiled to bytecode and run on a virtual machine. You can cache the bytecode using APC. The bcompiler extension lets you generate standalone executables from PHP code, and Facebook's (now deprecated) HPHPc used to compile PHP to C++ and then to native code. Nowadays, Facebook uses HHVM, a custom virtual machine. Find out more here.
The same evolution can be seen in JavaScript interpreters (which are called engines nowadays). They're not the plain parse-and-execute threads of old, as you still seem to think they are. There's a lot of wizardry going on in terms of memory management, JIT compilation (even tail-call optimization), optimization and what have you...
All great things, but they make it rather hard to determine where the actual bottlenecks are. The way each engine optimizes differs even more than IE6 differs from IE10, so it's next to impossible to pinpoint the bottlenecks definitively. If one browser takes 10 seconds for a DOM-intensive task, another might take only 1-2 seconds. If, however, the same browsers were pitted against each other to check the performance of the RegExp object, the boot might be on the other foot.
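If you want to see that for yourself, here's a crude sketch of such a measurement (the pattern, test string and iteration count are arbitrary); run it in two different browsers and compare the numbers:
console.time('regexp');
var re = /(\w+)@(\w+)\.com/;
for (var i = 0; i < 100000; i++)
{
    re.test('someone@example.com');
}
console.timeEnd('regexp'); // the reported time can differ wildly per engine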
Let's not forget that, after you've written your blog post about your findings, you'll have to check whether either of the browsers has since released a new version or update that claims to speed up certain tasks.