97

I've been wondering, is there a performance difference between using named functions and anonymous functions in JavaScript?

for (var i = 0; i < 1000; ++i) {
    myObjects[i].onMyEvent = function() {
        // do something
    };
}

vs

function myEventHandler() {
    // do something
}

for (var i = 0; i < 1000; ++i) {
    myObjects[i].onMyEvent = myEventHandler;
}

The first is tidier since it doesn't clutter up your code with rarely-used functions, but does it matter that you're re-declaring that function multiple times?

nickf
  • I know it isn't in the question, but with regards to code-cleanliness/legibility I think the 'right way' is somewhere in the middle. "Clutter" of rarely-used top-level functions is annoying, but so is heavily-nested code that depends a lot on anonymous functions that are declared in-line with their invocation (think node.js callback hell). Both the former and the latter can make debugging/execution tracing difficult. – Zac B Aug 01 '12 at 14:58
  • The performance tests below run the function for thousands of iterations. Even if you see a substantial difference, a majority of the use cases won't be doing this in iterations of that order. Hence it's better to choose whatever suits your needs and ignore performance for this particular case. – user Jul 01 '14 at 16:11
  • @nickf Of course it's an old question, but see the new updated answer. – Chandan Pasunoori Dec 16 '14 at 08:29

12 Answers

95

The performance problem here is the cost of creating a new function object at each iteration of the loop and not the fact that you use an anonymous function:

for (var i = 0; i < 1000; ++i) {    
    myObjects[i].onMyEvent = function() {
        // do something    
    };
}

You are creating a thousand distinct function objects even though they have the same body of code and no binding to the lexical scope (closure). The following seems faster, on the other hand, because it simply assigns the same function reference to the array elements throughout the loop:

function myEventHandler() {
    // do something
}

for (var i = 0; i < 1000; ++i) {
    myObjects[i].onMyEvent = myEventHandler;
}

If you create the anonymous function before entering the loop and then only assign references to it to the array elements inside the loop, you will find that there is no performance or semantic difference whatsoever when compared to the named function version:

var handler = function() {
    // do something    
};
for (var i = 0; i < 1000; ++i) {    
    myObjects[i].onMyEvent = handler;
}

In short, there is no observable performance cost to using anonymous over named functions.

As an aside, it may appear from above that there is no difference between:

function myEventHandler() { /* ... */ }

and:

var myEventHandler = function() { /* ... */ }

The former is a function declaration whereas the latter is an assignment of an anonymous function to a variable. Although they may appear to have the same effect, JavaScript treats them slightly differently. To understand the difference, I recommend reading "JavaScript function declaration ambiguity".
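
As a minimal illustrative sketch of that difference (the names here are made up): the declaration is hoisted together with its body, whereas for the expression only the variable name is hoisted, so calling it before the assignment fails.

declared();       // works: the declaration is hoisted together with its body

function declared() {
    // do something
}

expressed();      // TypeError: expressed is not a function (only the var name is hoisted)

var expressed = function() {
    // do something
};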

The actual execution time for any approach is largely going to be dictated by the browser's implementation of the compiler and runtime. For a complete comparison of modern browser performance, visit the JS Perf site.

MisterJames
Atif Aziz
  • You forgot the parentheses before the function body. I just tested it, they are required. – Chinoto Vokro Jun 10 '14 at 17:14
  • it seems that the benchmark results are very js-engine dependent! – aleclofabbro Dec 17 '14 at 11:44
  • Isn't there a flaw in the JS Perf example: Case 1 only _defines_ the function, whereas case 2 & 3 seem to accidentally _call_ the function. – bluenote10 Jun 17 '17 at 08:43
  • So using this reasoning, does it mean that when developing node.js web applications, it's better to create the functions outside the request flow, and pass them as callbacks, than to create anonymous callbacks? – Xavier Mukodi Jul 26 '20 at 16:43
23

Here's my test code:

var dummyVar;
function test1() {
    for (var i = 0; i < 1000000; ++i) {
        dummyVar = myFunc;
    }
}

function test2() {
    for (var i = 0; i < 1000000; ++i) {
        dummyVar = function() {
            var x = 0;
            x++;
        };
    }
}

function myFunc() {
    var x = 0;
    x++;
}

document.onclick = function() {
    var start = new Date();
    test1();
    var mid = new Date();
    test2();
    var end = new Date();
    alert("Test 1: " + (mid - start) + "\n Test 2: " + (end - mid));
};

The results:
Test 1: 142ms
Test 2: 1983ms

It appears that the JS engine doesn't recognise that it's the same function in Test2 and compiles it each time.
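
If you want to rerun this measurement in a current browser, here is a minimal variant of the same harness, assuming the test1, test2 and myFunc functions defined above; performance.now() simply gives higher-resolution timestamps than Date.

document.onclick = function() {
    var start = performance.now();
    test1();
    var mid = performance.now();
    test2();
    var end = performance.now();
    alert("Test 1: " + (mid - start).toFixed(1) + "ms\n Test 2: " + (end - mid).toFixed(1) + "ms");
};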

nickf
  • In which browser was this test conducted? – andynil Nov 24 '10 at 09:33
  • Times for me on Chrome 23: (2ms/17ms), IE9: (20ms/83ms), FF 17: (2ms/96ms) – Davy8 Dec 28 '12 at 16:29
  • Your answer deserves more weight. My times on Intel i5 4570S: Chrome 41 (1/9), IE11 (1/25), FF36 (1/14). Clearly the anonymous function in a loop performs worse. – ThisClark Mar 31 '15 at 16:34
  • This test isn't as useful as it appears. In neither example is the interior function actually being executed. Effectively all this test is showing is that creating a function 10000000 times is slower than creating a function once. – Owen Allen Jul 09 '15 at 01:10
  • This test shows that the engine doesn't recognize that the newly created function is always the same. But the difference is smaller in 2021. – licancabur Nov 29 '21 at 09:52
3

As a general design principle, you should avoid implementing the same code multiple times. Instead you should lift common code out into a function and execute that (general, well tested, easy to modify) function from multiple places.

If (unlike what your question implies) you are declaring the internal function once and using that code once (and have nothing else identical in your program), then an anonymous function probably (that's a guess, folks) gets treated the same way by the compiler as a normal named function.

It's a very useful feature in specific instances, but shouldn't be used in many situations.

Tom Leys
1

Where we can see a performance impact is in the operation of declaring functions. Here is a benchmark of declaring functions inside the context of another function versus outside:

http://jsperf.com/function-context-benchmark

In Chrome the operation is faster if we declare the function outside, but in Firefox it's the opposite.

In another example we see that if the inner function is not a pure function, there is a performance penalty in Firefox as well: http://jsperf.com/function-context-benchmark-3
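
A rough sketch of the kind of cases that benchmark compares (the names are purely illustrative, not the actual jsPerf code):

// Declared outside: one function object, shared by every call.
function square(x) { return x * x; }
function outerDeclared(n) {
    return square(n) + 1;
}

// Declared inside: a fresh function object is created on every call.
function innerDeclared(n) {
    function sq(x) { return x * x; }
    return sq(n) + 1;
}

// Inner function that is not pure: it closes over a local variable.
function innerClosure(n) {
    var offset = 1;
    function sqPlus(x) { return x * x + offset; }
    return sqPlus(n);
}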

1

I wouldn't expect much difference, but if there is one, it will likely vary by scripting engine or browser.

If you find the code easier to grok, performance is a non-issue unless you expect to call the function millions of times.

Joe Skora
0

As pointed out in the comments on @nickf's answer: the answer to

Is creating a function once faster than creating it a million times

is simply yes. But as his JS Perf shows, it is not slower by a factor of a million, which suggests that repeated creation actually gets faster over time.

The more interesting question to me is:

How does repeated create + run compare to create once + repeated run?

If a function performs a complex computation, the time to create the function object is most likely negligible. But what about the overhead of creation in cases where the run itself is fast? For instance:

// Variant 1: create once
function adder(a, b) {
  return a + b;
}
for (var i = 0; i < 100000; ++i) {
  var x = adder(412, 123);
}

// Variant 2: repeated creation via function statement
for (var i = 0; i < 100000; ++i) {
  function adder(a, b) {
    return a + b;
  }
  var x = adder(412, 123);
}

// Variant 3: repeated creation via function expression
for (var i = 0; i < 100000; ++i) {
  var x = (function(a, b) { return a + b; })(412, 123);
}

This JS Perf shows that creating the function just once is faster, as expected. However, even with a very quick operation like a simple add, the overhead of creating the function repeatedly is only a few percent.

The difference probably only becomes significant in cases where creating the function object is complex, while maintaining a negligible run time, e.g., if the entire function body is wrapped into an if (unlikelyCondition) { ... }.
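
For illustration, a sketch of that last case with a hypothetical unlikelyCondition flag: the function object is rebuilt on every iteration while the guarded body almost never runs, so creation cost stops being negligible relative to run time.

var unlikelyCondition = false; // hypothetical flag, almost always false

for (var i = 0; i < 100000; ++i) {
    // A new function object is created on every pass through the loop...
    var rareWork = function(value) {
        if (unlikelyCondition) {
            // ...but this comparatively large body rarely executes.
        }
    };
    rareWork(i);
}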

bluenote10
0

What will definitely make your loop faster across a variety of browsers, especially IE browsers, is looping as follows:

for (var i = 0, iLength = imgs.length; i < iLength; i++)
{
   // do something
}

You've put an arbitrary 1000 into the loop condition, but you get my drift if you want to go through all the items in the array.

Sarhanis
0

@nickf

That's a rather fatuous test, though: you're comparing execution plus compilation time, which is obviously going to disadvantage method 1 (compiled N times, depending on the JS engine) compared with method 2 (compiled once). I can't imagine a JS developer who would pass their probation writing code in such a manner.

A far more realistic approach is anonymous assignment, as you're in fact already using for your document.onclick method; something more like the following, which actually mildly favours the anon method.

Using a similar test framework to yours:


function test(m)
{
    for (var i = 0; i < 1000000; ++i) 
    {
        m();
    }
}

function named() {var x = 0; x++;}

var test1 = named;

var test2 = function() {var x = 0; x++;}

document.onclick = function() {
    var start = new Date();
    test(test1);
    var mid = new Date();
    test(test2);
    var end = new Date();
    alert ("Test 1: " + (mid - start) + "ms\n Test 2: " + (end - mid) + "ms");
}
splattne
annakata
0

A reference is nearly always going to be slower than the thing it's referring to. Think of it this way - let's say you want to print the result of adding 1 + 1. Which makes more sense:

alert(1 + 1);

or

a = 1;
b = 1;
alert(a + b);

I realize that's a really simplistic way to look at it, but it's illustrative, right? Use a reference only if it's going to be used multiple times - for instance, which of these examples makes more sense:

$(a.button1).click(function(){alert('you clicked ' + this);});
$(a.button2).click(function(){alert('you clicked ' + this);});

or

function buttonClickHandler(){alert('you clicked ' + this);}
$(a.button1).click(buttonClickHandler);
$(a.button2).click(buttonClickHandler);

The second one is better practice, even if it's got more lines. Hopefully all this is helpful (and the jQuery syntax didn't throw anyone off).

matt lohkamp
0

YES! Anonymous functions are faster than regular functions. If speed is of the utmost importance... more important than code re-use... then perhaps consider using anonymous functions.

There is a really good article about optimizing JavaScript and anonymous functions here:

http://dev.opera.com/articles/view/efficient-javascript/?page=2

Christopher Tokar
0

@nickf

(wish I had the rep to just comment, but I've only just found this site)

My point is that there is confusion here between named/anonymous functions and the use case of executing + compiling in an iteration. As I illustrated, the difference between anon+named is negligible in itself - I'm saying it's the use case which is faulty.

It seems obvious to me, but if not I think the best advice is "don't do dumb things" (of which the constant block shifting + object creation of this use case is one) and if you aren't sure, test!

annakata
-1

Anonymous objects are faster than named objects. But calling more functions is more expensive, and to a degree which eclipses any savings you might get from using anonymous functions. Each function called adds to the call stack, which introduces a small but non-trivial amount of overhead.

But unless you're writing encryption/decryption routines or something similarly sensitive to performance, as many others have noted it's always better to optimize for elegant, easy-to-read code over fast code.

Assuming you are writing well-architected code, then issues of speed should be the responsibility of those writing the interpreters/compilers.
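
For what it's worth, a rough, hypothetical sketch of how one might measure per-call overhead (the names are made up; note that modern engines often inline such small calls, so the gap may be negligible):

function addOne(x) { return x + 1; }   // helper called on every iteration

var t0 = performance.now();
var a = 0;
for (var i = 0; i < 10000000; ++i) { a = addOne(a); }   // extra call each time
var t1 = performance.now();
var b = 0;
for (var i = 0; i < 10000000; ++i) { b = b + 1; }       // same work written inline
var t2 = performance.now();
console.log("call: " + (t1 - t0) + "ms, inline: " + (t2 - t1) + "ms");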

pcorcoran
  • Congrats, your answer is featured as the factually correct one when Google searching `js are anonymous functions slower than named functions`. - https://imgur.com/a/x1vwZve – aggregate1166877 Nov 07 '22 at 07:22