57

On the one hand, I read or hear that "function calls are expensive" and that they impact efficiency (for example, in Nicholas Zakas' Google tech talk).

On the other hand, it seems generally accepted here that functions/methods are best kept short and should really perform only one task.

Am I missing something here, or don't these two pieces of advice run contrary to one another? Is there some rule-of-thumb that allows one to maintain a Zen-like balance?

Nick
  • I haven't watched that talk, but if we're talking about micro-optimization, then just do whatever you're more comfortable with. If you like the functional style, go with it. If you're more of a big-class-library person, that's fine too. I don't think it will make a big enough difference to really prefer one over the other. Other problems might arise in different situations anyway, so they compensate for each other. – elclanrs Jun 23 '12 at 11:02
  • What you're asking is essentially "how to write a good program" - one that a) is readable, but b) performs well. There's no definitive recipe; that's why we programmers still have our bread and butter. – georg Jun 23 '12 at 11:12
  • A while back, I was worried about the cost of function calls (specifically, in my case, related to the `Array#forEach` function), so I profiled the cost on the slowest (desktop) browser I could lay my hands on: IE6 running in an old Windows 2000 virtual machine I had. Cost? About 2.78 *microseconds* per function call. You heard me, *microseconds*. That's 0.00278 milliseconds. Test [here](http://jsperf.com/function-call-cost-on-ie6), blog post [here](http://blog.niftysnippets.org/2012/02/foreach-and-runtime-cost.html). So for me that's "don't worry, be happy" territory. – T.J. Crowder Dec 29 '13 at 14:38
  • @T.J.Crowder That's really interesting/amusing. I guess relative performance is just that - relative. Blistering fast vs. insanely blistering fast doesn't make the former slow :) – Nick Dec 30 '13 at 01:12
  • @T.J.Crowder 360 thousand calls per second could be far too few for complex algorithms. – Brian Cannard Dec 12 '14 at 19:03
  • @avesus: Again, that was an old, underpowered Windows 2000 virtual machine running calls on IE6's old, incredibly slow JavaScript interpreter (it didn't do JIT compiling like current ones do). Modern engines would easily be orders of magnitude faster, **and** they'd inline things where possible. Worry about it if and when, because IAGBAP (I just made that up: It Ain't Gonna Be A Problem, kinda like YAGNI...). :-) – T.J. Crowder Dec 12 '14 at 19:09
  • @T.J.Crowder The cost is small, but 1/cost is _finite_. I really hate my habits, which come from my C++ defensive-development past. Functions are a great thing, especially for JavaScript. Even using switch..case seems bad enough in light of this topic, mind you. So, for any function with more than 3 if's/else-if's, should we rewrite it with a lookup table? – Brian Cannard Dec 12 '14 at 19:46
  • @avesus: Let's not get into a discussion here, but I will just point out *inlining* again. Once inlined, the cost is zero. As for the question: *If it makes sense to*, sure; if not, no. The question cannot be reasonably answered in the abstract. – T.J. Crowder Dec 12 '14 at 19:55
  • Looks like a [lookup table is less performant](https://jsperf.com/if-switch-lookup-table/5). And yes, I'm aware this is an old thread, but this is on Chrome (V8). – dkran Feb 02 '16 at 21:47
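For the curious, the switch-vs-lookup-table question in the comments above can be sketched like this (a toy example with invented names; as the jsperf link suggests, measure before committing to either):

```javascript
"use strict";

// Switch version: a sequence of case comparisons.
function codeToNameSwitch(code) {
  switch (code) {
    case 200: return "OK";
    case 404: return "Not Found";
    case 500: return "Server Error";
    default:  return "Unknown";
  }
}

// Lookup-table version: a single property access.
const names = { 200: "OK", 404: "Not Found", 500: "Server Error" };
function codeToNameTable(code) {
  return names[code] || "Unknown";
}

console.log(codeToNameSwitch(404)); // "Not Found"
console.log(codeToNameTable(404));  // "Not Found"
```

Which one is faster depends on the engine and on how many cases there are; the readability difference is usually the better reason to choose.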

5 Answers

50

The general rule, applying to all languages, is: keep functions (methods, procedures) as small as possible. When you add proper naming, you get very maintainable and readable code where you can easily focus on the general picture and drill down to interesting details. With one huge method you are always looking at the details, and the big picture is hidden.

This rule applies especially to clever languages and compilers that can do fancy optimizations like inlining, or discovering which methods aren't really virtual so double dispatch isn't needed.

Back to JavaScript - this is heavily dependent on the JavaScript engine. In some cases I would expect a decent engine to inline functions, avoiding the cost of the call, especially in tight loops. However, unless you have a performance problem, prefer smaller functions. Readability is much more important.
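A tiny sketch of that advice (the function and variable names here are invented for illustration): the same computation written as one monolithic function and as three small, well-named helpers. A modern JIT can usually inline the helpers, so the readable version need not cost anything.

```javascript
"use strict";

// Monolithic version: every detail in one place.
function totalPriceMonolithic(items, taxRate) {
  let sum = 0;
  for (const item of items) sum += item.price * item.qty;
  const discounted = sum > 100 ? sum * 0.9 : sum;
  return discounted * (1 + taxRate);
}

// Split version: each helper does one named task.
const subtotal = (items) => items.reduce((s, i) => s + i.price * i.qty, 0);
const applyBulkDiscount = (amount) => (amount > 100 ? amount * 0.9 : amount);
const addTax = (amount, taxRate) => amount * (1 + taxRate);

function totalPrice(items, taxRate) {
  return addTax(applyBulkDiscount(subtotal(items)), taxRate);
}

const cart = [{ price: 60, qty: 1 }, { price: 50, qty: 1 }];
console.log(totalPriceMonolithic(cart, 0.2)); // both versions print the same total
console.log(totalPrice(cart, 0.2));
```

The second version reads top-down as a sentence, and each helper can be tested on its own.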

Tomasz Nurkiewicz
  • +1 Write for readability first, then profile your code if you have efficiency problems and optimize the bottlenecks. – Casey Kuball Jun 23 '12 at 15:05
  • Keeping functions as small as possible is very bad advice. Each function makes the code structure more complex, so if a function is called from only one place it is usually better to inline it in that place with a comment. – Alexander Danilov Aug 18 '19 at 14:20
13

In a perfect world, where there are no bugs (because code just fixes itself magically) and requirements are frozen from day one, it may be possible to live with huge omnipotent functions.

But in this world it turns out to be just too expensive - and not only in terms of 'man-months'. Nicholas Zakas wrote a brilliant article describing most of the challenges software developers face these days.

The transition may seem somewhat artificial, but my point is that 'one function - one task' approach is much more maintainable and flexible - in other words, it's what makes BOTH developers and customers happy, in the end.

That doesn't mean, though, that you shouldn't strive to use as few function calls as possible: just remember that it's not a top priority.

raina77ow
4

My rule of thumb is that it's time to break a function into smaller pieces if it is more than a screen-full of lines long, though many of my functions just naturally end up somewhat smaller than that without being "artificially" split. And I generally leave enough white-space that even a screen-full isn't really a whole lot of code.

I try to have each function do only one task, but then one task might be "repaint the screen" which would involve a series of sub-tasks implemented in separate functions that in turn might have their own sub-tasks in separate functions.

Having started with what feels natural (to me) for readability (and therefore ease of maintenance), I don't worry about function calls being expensive unless a particular piece of code performs badly when tested - then I'd look at bringing things back in-line (particularly in loops, starting with nested loops). Having said that, sometimes you just know that a particular piece of code isn't going to perform well, and you rewrite it before getting as far as testing...

I'd avoid "premature optimisation", particularly with languages that use smart compilers that might do those same optimisations behind the scenes. When I first started C# I was told that breaking code up into smaller functions can be less expensive at run-time because of the way the JIT compiler works.

Going back to my one screen-full rule, in JavaScript it is common to have nested functions (due to the way JS closures work), and this can make the containing function longer than I'd like if I were using another language, so sometimes the end result is a compromise.
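A small sketch of that JavaScript-specific point (all names here are invented): helpers often have to stay nested inside the containing function because they close over its variables, which stretches the containing function's length.

```javascript
"use strict";

// The helpers must live inside makeRenderer because they
// close over `cellSize` and `grid`.
function makeRenderer(cellSize) {
  const grid = [];

  function cellOrigin(row, col) {   // nested: needs cellSize
    return { x: col * cellSize, y: row * cellSize };
  }

  function paintCell(row, col) {    // nested: needs grid and cellOrigin
    grid.push(cellOrigin(row, col));
  }

  return {
    repaint(rows, cols) {           // one task, built from sub-tasks
      grid.length = 0;
      for (let r = 0; r < rows; r++)
        for (let c = 0; c < cols; c++) paintCell(r, c);
      return grid.length;
    },
  };
}

const renderer = makeRenderer(16);
console.log(renderer.repaint(2, 3)); // 6 cells painted
```

Pulling `cellOrigin` or `paintCell` out to the top level would mean threading `cellSize` and `grid` through every call, so the nesting is the compromise.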

nnnnnn
2

To all: This has more the feel of a "comment" than an answer. Acknowledged. I chose to use the space of an "answer". Please bear with me.

@StefanoFratini: Please take my note as building on your work. I want to avoid being critical.

Here are two ways to further improve the code in your post:

  • Use both halves of the tuple coming from process.hrtime(). It returns an array [seconds, nanoseconds]. Your code uses the nanoseconds part of the tuple (element 1), but as far as I can tell it never uses the seconds part (element 0).
  • Be explicit about units.

Can I match my bluster? Dunno. Here's a development of Stefano's code. It has flaws; I won't be surprised if someone points them out. And that'd be okay.

"use strict";

var a = function(val) { return val+1; }

var b = function(val) { return val-1; }

var c = function(val) { return val*2 }

var time = process.hrtime();

var reps = 100000000

for(var i = 0; i < reps; i++) { a(b(c(100))); }

time = process.hrtime(time)
let timeWith = time[0] + time[1]/1000000000
console.log(`Elapsed time with function calls: ${ timeWith } seconds`);

time = process.hrtime();
var tmp;
for(var i = 0; i < reps; i++) { tmp = 100*2 - 1 + 1; }

time = process.hrtime(time)
let timeWithout = time[0] + time[1]/1000000000
console.log(`Elapsed time without function calls: ${ timeWithout } seconds`);

let percentWith = 100 * timeWith / timeWithout
console.log(`\nThe time with function calls is ${ percentWith } percent\n` +
    `of time without function calls.`)

console.log(`\nEach repetition with a function call used roughly ` +
        `${ timeWith / reps } seconds.` +
    `\nEach repetition without a function call used roughly ` +
        `${ timeWithout / reps } seconds.`)

It is clearly a descendant of Stefano's code. The results are quite different.

Elapsed time with function calls: 4.671479346 seconds
Elapsed time without function calls: 0.503176535 seconds

The time with function calls is 928.397693664312 percent
of time without function calls.

Each repetition with a function call used roughly 4.671479346e-8 seconds.
Each repetition without a function call used roughly 5.0317653500000005e-9 seconds.

Like Stefano, I used Win10 and Node (v6.2.0 for me).

I acknowledge the arguments that

  • "For perspective, in a nanosecond (a billionth, 1e-9), light travels roughly 12 inches."
  • "We're only talking about small numbers of nanoseconds (47 to 5), so who cares about percentages?"
  • "Some algorithms make zillions of function calls each second, so it adds up for them."
  • "Most of us developers don't work with those algorithms, so worrying about the number of function calls is counterproductive for most of us."

I'll hang my hat on the economic argument: my computer and the one before it each cost less than $400 (US). If a software engineer earns something like $90 to $130 per hour, three or four hours of their work is worth as much to their bosses as a computer like mine. In that environment:

How does that compare to the dollars per hour a company loses when software it needs stops working?

How does that compare to lost good will and prestige when a paying customer temporarily can't use shrink-wrapped software produced by a business partner?

There are many other such questions. I'll omit them.

As I interpret the answers: Readability and maintainability reign over computer performance. My advice? Write the first version of your code accordingly. Many people I respect say short functions help.

Once you finish your code and don't like the performance, find the choke points. Many people I respect say those points are never where you would have expected them. Work 'em when you know 'em.

So both sides are right. Some.

Me? I guess I'm off somewhere. Two cents.

BaldEagle
  • @StefanoFratini 's "performance test" tests the performance of 1000000000 x `tmp = 200` vs 1000000000 x `a(b(c(100)))`. Yes, the difference would be rather astounding, obviously. Testing the performance of any aspect of any modern language without turning off the optimisations of said language's compiler is kind of a moot point. I agree with you: the only viable approach is to write clean and maintainable code, and apply optimizations _only_ to code that proved to be inefficient in the testing stage and/or in production. – ankhzet Sep 07 '18 at 05:28
1

Function calls are always expensive (especially in `for` loops), and inlining doesn't happen as often as you may think.

The V8 engine that ships with Node.js (any version) is supposed to do inlining extensively, but in practical terms this capability is greatly constrained.

The following (trivial) snippet of code illustrates my point (Node 4.2.1 on Win10 x64):

"use strict";

var a = function(val) {
  return val+1;
}

var b = function(val) {
  return val-1;
}

var c = function(val) {
  return val*2;
}

var time = process.hrtime();

for(var i = 0; i < 100000000; i++) {
  a(b(c(100)));
}

console.log("Elapsed time function calls: %j",process.hrtime(time)[1]/1e6);

time = process.hrtime();
var tmp;
for(var i = 0; i < 100000000; i++) {
  tmp = 100*2 + 1 - 1;
}

console.log("Elapsed time NO function calls: %j",process.hrtime(time)[1]/1e6);

Results

Elapsed time function calls: 127.332373
Elapsed time NO function calls: 104.917725

Roughly a 20% performance drop.

One would have expected the V8 JIT compiler to inline those functions, but in reality `a`, `b`, or `c` could be called somewhere else in the code, so they are not good candidates for the low-hanging-fruit inlining approach you get with V8.

I've seen plenty of code (Java, PHP, Node.js) with poor performance in production because of method or function call abuse: if you write your code Matryoshka style, run-time performance will degrade linearly with the invocation stack depth, despite looking conceptually clean.
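One caveat with benchmarks like the one above (raised in the comments): the no-call loop body is a constant expression the optimizer can fold away, and using only element 1 of the process.hrtime() tuple drops whole seconds. Here is a hedged, Node-only sketch of a slightly fairer harness (the `bench` helper and its names are invented): it feeds the loop a varying value and accumulates a result, so neither loop body can be discarded as dead code.

```javascript
"use strict";

const a = (val) => val + 1;
const b = (val) => val - 1;
const c = (val) => val * 2;

// Time `reps` iterations of fn, returning the accumulated result so
// the optimizer cannot remove the loop body as dead code.
function bench(label, fn, reps) {
  const start = process.hrtime();
  let sink = 0;
  for (let i = 0; i < reps; i++) sink += fn(i);
  const [s, ns] = process.hrtime(start);
  console.log(`${label}: ${(s + ns / 1e9).toFixed(3)} s`);
  return sink;
}

const reps = 1e7;
const withCalls = bench("with function calls", (i) => a(b(c(i))), reps);
const withoutCalls = bench("without function calls", (i) => i * 2 - 1 + 1, reps);

// Both loops compute the same values (a(b(c(i))) === i * 2),
// so the sums must agree.
console.log(withCalls === withoutCalls); // true
```

Even with this harness, remember that both arms still pay one call to the `fn` callback per iteration; only the extra `a`/`b`/`c` calls differ between them.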

Stefano Fratini
  • Did you run this more than once? Running this on Node 14 I'm seeing a 2% difference between the two. – Alexis Tyler Feb 18 '22 at 11:25
  • My reply is from 2015 :) As runtimes become more efficient, you will always see differences. Nevertheless, Node.js (as an interpreted language with just-in-time compilation) will never outperform a statically compiled language. – Stefano Fratini Jun 21 '22 at 01:10
  • I'm sorry, but a language being compiled doesn't inherently make it any faster. There are JIT languages that will outperform certain compiled languages. The same goes in the other direction. Blanket statements don't really help here. – Alexis Tyler Jun 21 '22 at 06:01