
I have a program that makes a lot of calls to a handful of functions, each of which locally allocates fixed-size arrays (a few hundred bytes total). Is it correct to assume that moving all the allocations to main and then passing pointers will be faster? In other words, does subtracting from the stack pointer take linear or constant time, and, if it takes constant time, what is the cost compared to passing a pointer to a function?

I did a small speed test. Example #1 runs a little faster.

Example #1

#include <iostream>
using namespace std;
int f(int* a){

    // do stuff

    return 0;
}

int main(){

    int a[1000];

    int x;
    for (int i = 0; i < 50000; ++i){
        x=f(a);
    }
    return 0;
}

Example #2

#include <iostream>
using namespace std;

int f(){

    int a[1000];

    // do stuff...

    return 0;
}

int main(){

    int x;
    for (int i = 0; i < 50000; ++i){
        x=f();
    }
    return 0;
}
Nathan Schmidt
  • can someone explain what he is trying to ask? – Irrational Person Apr 18 '15 at 07:04
  • I mean, if you declare a fixed size array inside a function, is it sometimes good to move the declaration to main and then pass a pointer to the function so you don't have to keep allocating space for each call? Or, is it better to keep the stack small when the function's not being used – Nathan Schmidt Apr 18 '15 at 07:11
  • 1
    maybe. maybe not. we don't know. performance can't be guessed, it has to be measured by profiling. but until it matters, don't worry about it. like, at all. – The Paramagnetic Croissant Apr 18 '15 at 07:24
  • No, it's not correct to assume that. Stack space "allocation" is usually a single subtraction operation. And none of the things you're considering is "static". – molbdnilo Apr 18 '15 at 07:27
  • 1
    In general, many think it is good style to define variables in the narrowest scope possible (while avoiding recalculation of same value or data, of course). So moving the array outwards to *another function* is kinda going the wrong way... – hyde Apr 18 '15 at 07:58

2 Answers


You seem to think that allocating space for locals is expensive, when in fact it isn't (it's just a subtraction from the stack pointer).

Considering the mess you'd probably make with pointers back-referencing "semi-global" local variables in main(), I can't see any real value in what you propose, although it's certainly possible to come up with a special example that proves me wrong.

In general, trying to optimize in the early stages of coding is a bad idea, especially if you trade simplicity and readability for (questionable) efficiency.

Try to code as simply and straightforwardly as possible. Optimize at a later stage if necessary, and not before you have clearly identified the bottlenecks (which is not easy).

mfro
  • ok, that's helpful. I have a program that calls some functions millions of times and was going to move all the local declarations to main (about 40 arrays). – Nathan Schmidt Apr 18 '15 at 07:50
  • then it's probably worth ensuring you provide the compiler all the information it needs for proper inlining (e.g. do not define those functions in separate translation units, ensure the functions have been seen before you call them, etc.). You don't need to declare those functions inline, just give the compiler everything it needs to decide itself. This doesn't save local variable space but might save a lot of parameter passing and function call overhead. And it doesn't contradict what I said above, since it doesn't affect clarity and readability. – mfro Apr 18 '15 at 08:01
  • just two questions about subtracting from the stack pointer: 1) does it take linear or constant time? 2) generally, is it cheaper than passing a pointer to a function? – Nathan Schmidt Apr 20 '15 at 07:07
  • @NathanSchmidt It's constant time and ridiculously cheap, often just arithmetic on a stack pointer register. The one time where what you were doing might really pay off with an optimized production build is when your objects involve the heap (ex: std::vector). And I'd still recommend against it without a profiler in your hand and some good measurements. –  May 06 '15 at 19:58

There is no difference between the two the way you have written them.

On some systems, large allocations on the stack can cause problems, but an int[1000] (roughly 4 KB) is a relatively small array, and you are never allocating more than one of them at a time.

Consider the case where f() is a recursive function. Then it would be possible to have large, repeated allocations.

user3344003