
I'm writing a Google Chrome extension. As the JavaScript files are loaded from disk, their size barely matters.

I've been using Google Closure Compiler anyway, because apparently it can make performance optimizations as well as reducing code size.

But I noticed this at the top of my output from Closure Compiler:

var i = true, m = null, r = false;

The point of this is obviously to reduce the filesize (all subsequent uses of true/null/false throughout the script can be replaced by single characters).

But surely there's a slight performance hit from that? It must be quicker to read a literal true keyword than to look up a variable by name and find that its value is true...?
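To make the comparison concrete, here is the pattern in both forms (a minimal sketch; the function names are mine, not compiler output):

```javascript
// Aliased form, roughly as emitted by the compiler (names are illustrative):
var i = true, m = null, r = false;
function aliased(x) { return x === m ? r : i; }

// Literal form, as originally written:
function literal(x) { return x === null ? false : true; }
```

Both behave identically; the question is whether reading the variable `i` costs more than reading the keyword `true`.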

Is this performance hit worth worrying about? And is there anything else Google Closure Compiler does that might actually slow down execution?

callum
  • I doubt the performance hit is big, but the decrease in file size most definitely will outweigh it. – Rafe Kettler Nov 09 '11 at 13:47
  • I'm sure the real answer to this lies in whether v8 or any of the other js JITs do global constant propagation and folding (which I'm sure they do to an extent) – Necrolis Nov 09 '11 at 13:55
  • 21
    Using "GCC" as an abbreviation for "Google Closure Compiler" is likely to lead to confusion. http://gcc.gnu.org/ – Mike Samuel Nov 09 '11 at 14:00
  • 3
    This question is not actual anymore, because compiler now uses `!0` for `true` and `!1` for `false`. – Rok Kralj Aug 05 '12 at 12:59
  • 1
    @RokKralj, **Still valid**. Surely `!0` is slower than `true`. – Pacerier Mar 06 '15 at 09:19

3 Answers


The answer is maybe.

Let's look at what the Closure team says about it.

From the FAQ:

Does the compiler make any trade-off between my application's execution speed and download code size?

Yes. Any optimizing compiler makes trade-offs. Some size optimizations do introduce small speed overheads. However, the Closure Compiler's developers have been careful not to introduce significant additional runtime. Some of the compiler's optimizations even decrease runtime (see next question).

Does the compiler optimize for speed?

In most cases smaller code is faster code, since download time is usually the most important speed factor in web applications. Optimizations that reduce redundancies speed up the run time of code as well.

I flatly challenge the first assumption they've made here. The size of the variable names used does not directly impact how the various JavaScript engines treat the code. In fact, JS engines don't care whether you call your variable supercalifragilisticexpialidocious or x (though I, as a programmer, sure do). Download time is the most important part if you're worried about delivery; a slow-running script can be caused by millions of things that I suspect the tool simply cannot account for.

To truthfully understand why the answer to your question is "maybe", the first thing you need to ask is: "What makes JavaScript fast or slow?"

Then of course we run into the question, "What JavaScript engine are we talking about?"

We have:

  • Carakan (Opera)
  • Chakra (IE9+)
  • SpiderMonkey (Mozilla/Firefox)
  • SquirrelFish (Apple's WebKit)
  • V8 (Chrome)
  • Futhark (Opera)
  • JScript (All versions of IE before 9)
  • JavaScriptCore (Konqueror, Safari)
  • I've skipped out on a few.

Does anyone here really think they all work the same? Especially JScript and V8? Heck no!

So again, when the Closure Compiler compiles code, which engine is it building for? Are you feeling lucky?

Okay, since we'll never cover all these bases, let's look at this more generally: "old" vs. "new" engines.

Here's a quick summary for this specific part from one of the best presentations on JS Engines I've ever seen.

Older JS engines

  • Code is interpreted and compiled directly to byte code
  • No optimization: you get what you get
  • Code is hard to run fast because of the loosely typed language

New JS Engines

  • Introduce Just-In-Time (JIT) compilers for fast execution
  • Introduce type-optimizing JIT compilers for really fast code (think near-C speeds)

Key difference here being that new engines introduce JIT compilers.

In essence, a JIT will optimize your code so that it runs faster, but if something it doesn't like happens, it turns around and deoptimizes, making the code slow again.

You can see what the compiler does here by writing two type-specific functions like these:

var FunctionForIntegersOnly = function(int1, int2){
    return int1 + int2;
}

var FunctionForStringsOnly = function(str1, str2){
    return str1 + str2;
}

alert(FunctionForIntegersOnly(1, 2) + FunctionForStringsOnly("a", "b"));

Running that through the Closure Compiler simplifies the whole thing down to:

alert("3ab");

And by every metric in the book, that's way faster. What really happened is that the compiler simplified my very simple example through a bit of partial execution. This is where you need to be careful, however.

Let's say we have a Y combinator in our code; the compiler turns it into something like this:

(function(a) {
 return function(b) {
    return a(a)(b)
  }
})(function(a) {
  return function(b) {
    if(b > 0) {
      return console.log(b), a(a)(b - 1)
    }
  }
})(5);

Not really faster; it just minified the code.

A JIT would normally see that, in practice, your code only ever passes two strings to the second function (and two integers to the first) and returns the matching type, and would move each into the type-specific JIT, which makes them really quick. Now, if the Closure Compiler did something strange, like merging two functions with nearly identical signatures into one (for non-trivial code), you might lose that JIT speed, because the compiler produced something the JIT doesn't like.
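To illustrate the risk, here's a hedged sketch of such a merge (hypothetical; this is not a transform the Closure Compiler is documented to perform). Collapsing the two type-specific functions into one means a single function receives mixed argument types, which defeats type-specializing JITs:

```javascript
// Hypothetical merged function: handles what FunctionForIntegersOnly and
// FunctionForStringsOnly used to handle separately.
function addAnything(a, b) {
  // Receives (number, number) AND (string, string), so the engine can no
  // longer specialize this function for a single argument type.
  return a + b;
}

addAnything(1, 2);     // number path
addAnything("a", "b"); // string path: the function is now polymorphic
```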

So, what did we learn?

  • You might have JIT-optimized code, but the compiler re-organizes your code into something else
  • Old browsers don't have a JIT but still run your code
  • Closure-compiled JS makes fewer function calls by partially executing your simple functions at compile time

So what do you do?

  • Write small, to-the-point functions; the compiler can deal with them better
  • If you have a very deep understanding of JITs and hand-optimize your code using that knowledge, the Closure Compiler may not be worthwhile for you
  • If you want the code to run a bit faster on older browsers, it's an excellent tool
  • The trade-offs are generally worthwhile, but be careful to check things over and not blindly trust it all the time

In general, your code will be faster. You may introduce things that various JIT compilers don't like, but they're going to be rare if your code uses small functions and sound prototypal object-oriented design. If you think about the full scope of what the compiler is doing (shorter download AND faster execution), then strange things like `var i = true, m = null, r = false;` may be a worthwhile trade-off the compiler made: even if the aliases run marginally slower, the total lifespan of the code (download, parse, and execute) is faster.

It's also worth noting that the most common bottleneck in web-app execution is the Document Object Model (DOM), and I suggest you put more effort there if your code is slow.
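As a concrete illustration of that kind of bottleneck (a sketch; `renderSlow` and `renderFast` are hypothetical helpers, not anything the compiler generates):

```javascript
// Slow: writes to the live DOM on every iteration; each `innerHTML +=`
// re-serializes and re-parses the entire list so far.
function renderSlow(list, items) {
  for (var i = 0; i < items.length; i++) {
    list.innerHTML += "<li>" + items[i] + "</li>";
  }
}

// Faster: build the markup as a string, then touch the DOM exactly once.
function renderFast(list, items) {
  var html = "";
  for (var i = 0; i < items.length; i++) {
    html += "<li>" + items[i] + "</li>";
  }
  list.innerHTML = html;
}
```

No amount of micro-optimization of `true` vs. a variable will matter next to the difference between these two loops on a large list.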

Incognito
  • @John I fully understand the statement, but I'm not sure where I was unclear. I did state "Download time is the most important part if you're worried about delivery-- a slow running script can be caused by millions of things that I suspect the tool simply cannot account for." Can you point it out so I can fix it? I mention the DOM at the end of the post as well. – Incognito Nov 10 '11 at 02:24
  • 3
    Excellent answer, but I can't help worrying that the JIT compiler might have more trouble inlining and partially evaluating the "constant" value if it's in a variable, which could conceivably be re-assigned later. – andrewmu Nov 10 '11 at 12:27
  • 3
    @incognito: to quote: "I flatly challenge the first assumption they've made here. The size of vars names ..." We didn't make that assumption. Where did you get the idea that the Closure Compiler equates variable name size with runtime performance? However, the startup time of an application is highly correlated to the time it takes to download the code and there (sadly) every byte counts. This fact has only gotten worse over the last decade as the mobile platforms have become significant with it high latency/low bandwidth/low reliability connects, subpar processors and tiny browser caches. – John Nov 15 '11 at 16:11
  • 2
    I should clarify that barring the strange beginning the general assessment is sound, but I would add that it is difficult to "tune to the JIT" as there are no less than 4 major engines (one for each of the major browsers, with out considering upcoming engines) each of which have different characteristics – John Nov 20 '11 at 02:00
  • @Incognito, Your second last paragraph states "the total lifespan was faster". What is "total lifespan" referring to here? – Pacerier Mar 06 '15 at 09:22
  • @John, Google is very likely tuning to **its own** JIT (as opposed to JITs in general) so that Chrome beats its competitors. – Pacerier Mar 06 '15 at 09:25
  • @Pacerier Things have changed a lot in the last 4 years, and the JITs in the current browsers all follow the same general model (hidden classes, etc.) with some differences in specifics. Most of the tuning I see these days is for the mobile variants (iOS, Android), not specifically V8. The compiler team follows the same general approach: fixing egregious problems in any engine and ignoring minor differences. – John Mar 06 '15 at 23:44

It would appear that in modern browsers, using the literal true or null vs. a variable makes absolutely no difference in almost all cases (as in zero; they are exactly the same). In a very few cases, the variable is actually faster.

So those extra bytes saved are worth it and cost nothing.

true vs variable (http://jsperf.com/true-vs-variable):

[jsperf chart: true vs variable]

null vs variable (http://jsperf.com/null-vs-variable):

[jsperf chart: null vs variable]
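The shape of such a micro-benchmark, for reference (a rough sketch in the spirit of those jsperf tests; absolute numbers vary wildly by engine and the two loops may well compile to identical machine code):

```javascript
// Compare branching on the literal `true` against branching on a
// variable that holds `true`.
var t = true;

function literalLoop(n) {
  var count = 0;
  for (var i = 0; i < n; i++) {
    if (true) count++;   // literal keyword
  }
  return count;
}

function variableLoop(n) {
  var count = 0;
  for (var i = 0; i < n; i++) {
    if (t) count++;      // variable lookup
  }
  return count;
}

// Time each over a large n, e.g. with console.time()/console.timeEnd().
```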

ThinkingStiff
  • 64,767
  • 30
  • 146
  • 239

I think there will be a very slight performance penalty, but it is unlikely to matter much in newer, modern browsers.

Notice that the Closure Compiler's standard alias variables are all globals. This means that in an old browser whose JavaScript engine takes linear time to walk function scopes (e.g. IE < 9), the deeper you are within nested function calls, the longer it takes to find the variable holding `true`, `false`, etc. Almost all modern JavaScript engines optimize global variable access, so this penalty no longer holds in most cases.
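A sketch of the lookup cost being described (the nesting here is hypothetical; the penalty only applies to engines without optimized global access):

```javascript
var m = null; // compiler-emitted global alias for null

function outer() {
  function middle() {
    function inner(x) {
      // In an engine that walks scopes linearly (e.g. IE < 9), resolving
      // `m` here searches inner, middle, and outer before reaching the
      // global scope. A literal `null` would involve no lookup at all.
      return x === m;
    }
    return inner;
  }
  return middle();
}
```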

In addition, there really shouldn't be many places where you'd see `true`, `false` or `null` directly in compiled code, except in assignments or arguments. For example, `if (someFlag == true) ...` is mostly just written `if (someFlag) ...`, which the compiler turns into `a && ...`. You mostly only see the literals in assignments (`someFlag = true;`) and arguments (`someFunc(true);`), which really do not occur very frequently.
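An illustrative before/after of that rewrite (the compiled shape is approximate, not verbatim compiler output; `doWork` and `someFlag` are made-up names):

```javascript
var calls = 0;
function doWork() { calls++; }
var someFlag = true;

// Source form:
if (someFlag) {
  doWork();
}

// Compiled form: identical behavior, fewer bytes, and no true/false
// literal left for the aliases to replace.
someFlag && doWork();
```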

The conclusion is: although many people (me included) doubt the usefulness of the Closure Compiler's standard aliases, you shouldn't expect any material performance hit. You also shouldn't expect any material benefit in gzipped sizes, though.

Stephen Chung
  • FYI: "if (someFlag === true) ..." is not folded to "a && ...". Generally, the Closure Compiler does this kind of rewriting without knowledge of the values that can be held by the variables, and "a === true" could only become simply "a" if its only possible values were "true" and so-called falsy values (false, null, undefined, etc). – John Nov 15 '11 at 16:21
  • @John, you're right. `if (someFlag) ...` is folded into `a && ...`, but `=== true` or `== true` are folded into `a === true && ...` I'll edit my answer. Thanks! – Stephen Chung Nov 16 '11 at 04:38