
I see this style a lot in our code base, and online as well: in a function with for loops and if statements, all the variables used only inside those blocks (and nowhere else) are declared outside them. For example:

void process()
{
    int i;
    int count = 100;
    vector3 point;
    vector sum;

    for (i = 0; i < count; ++i)
    {
        import(this, "pos", point);
        sum += point;
    }
    sum /= count;
}

Is this done for performance reasons, or is it premature optimization? I am curious about this for C++, C#, and Python, which are the languages I use and where I have seen it over and over again.

syntonym
Joan Venge
  • I am only really familiar with Python (and MEL from Maya), but it seems to me that these variable declarations exist because `if`/`for` statements have their own scoping rules in certain languages (not Python, obviously). If you don't declare the variables outside of them, whatever assignments you make inside those statements can't be reused further down the function. Anyone, feel free to correct me if I'm wrong. – Eithos Feb 21 '15 at 03:01
  • Sounds like a style inherited from C89, where all variables must be declared at the start. – T.C. Feb 21 '15 at 03:09
  • I'll venture a guess that maybe it's _precisely_ because the function is subject to change. It would make sense to declare variables in such a way that extending the functionality later is easy. Kind of like how certain design patterns don't immediately seem useful until you've reached a certain level of complexity. I think in this case putting the variable declarations at the beginning is just a good habit. Whereas doing otherwise would constantly require fixing each time you tweaked the function. – Eithos Feb 21 '15 at 03:11

3 Answers


A lot of older code does this because it was required in C89/90. To be precise, it was never required that variables be defined at the beginning of the function, only at the beginning of a block. For example:

int f() {
    int x;       // allowed

    x = 1;
    int y = 2;   // declaration after a statement: allowed in C++ (and C99), but not C89

    {
        int z = 0;   // beginning of a new block, so allowed even in C89

        // code that uses `z` here
    }

    return x + y;
}

C++ has never had this restriction (and C hasn't since C99), but for some, old habits die hard. For others, maintaining consistency across the code base outweighs the benefit of defining variables close to where they're used.

As far as optimization goes, none of this will normally have any effect at all.
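For contrast, a sketch of the question's loop in the modern style, declaring each variable at the point of first use so its scope is as small as possible (the question's `import` call is replaced here by a hypothetical `get_point` helper, since the original API isn't specified):

```cpp
#include <cassert>

struct vector3 { double x, y, z; };

// Hypothetical stand-in for the question's import(this, "pos", point) call.
vector3 get_point(int i) { return {double(i), 0.0, 0.0}; }

vector3 process()
{
    const int count = 100;
    vector3 sum{0.0, 0.0, 0.0};        // declared (and initialized) where first needed

    for (int i = 0; i < count; ++i)    // i is scoped to the loop (C99/C++ style)
    {
        vector3 point = get_point(i);  // point is scoped to each iteration
        sum.x += point.x;
        sum.y += point.y;
        sum.z += point.z;
    }
    sum.x /= count;
    sum.y /= count;
    sum.z /= count;
    return sum;                        // average of the sampled points
}
```

Any decent compiler generates the same code for both versions; the narrower scopes are for readers, not for the optimizer.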

Jerry Coffin

It makes a difference in Python. It's a scoping issue: Python first searches a dictionary containing local variables and then works its way outward to globals and finally built-ins.

There is a slight speed increase from this in Python, although generally not a large one. Check this question for more details on Python, including some tests.

I can't comment on C++ or C#, but because they are compiled languages it shouldn't really matter.
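The lookup difference described above can be made visible with the standard `dis` module: in CPython, strictly-local names compile to fast indexed lookups (`LOAD_FAST`), while module-level names are resolved through the globals/builtins dictionaries (`LOAD_GLOBAL`). A small sketch (the function and variable names here are illustrative, not from the original question):

```python
import dis

COUNT = 100  # module-level (global) name


def uses_global():
    total = 0
    for i in range(COUNT):   # COUNT is looked up in globals each time it is read
        total += i
    return total


def uses_local():
    count = COUNT            # bind the global to a local name once
    total = 0
    for i in range(count):   # count is now a fast local lookup
        total += i
    return total


# Both functions compute the same result; only the lookup mechanism differs.
ops_global = {ins.opname for ins in dis.get_instructions(uses_global)}
ops_local = {ins.opname for ins in dis.get_instructions(uses_local)}
```

Here `ops_global` contains `LOAD_GLOBAL` for the reads of `COUNT`, while inside `uses_local` the hot loop reads `count` with `LOAD_FAST`. The measurable effect is real but small, and only worth thinking about in tight loops.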

ljetibo
  • Something similar happens in C++. There are blocks `{}` that delimit a scope. – Christian Tapia Feb 21 '15 at 03:33
  • Really? I find that a bit strange. I thought compilers could optimize and reference a local variable where possible, and the task wouldn't fall on the CLR if it didn't need to. Color me interested; got a good link for this? – ljetibo Feb 21 '15 at 03:39
  • In Python, `if` and `for` blocks do not have their own scope, so it is completely unnecessary to declare the variable outside the loop, even if that were possible. The difference between local and global is not the same as the difference between local and outer local, but in all cases the strictly local name wins, which is the opposite of the supposition in the OP's question. – rici Feb 21 '15 at 04:35
  • @rici The OP clearly presents a function, and although I can agree with you to some extent, you are forgetting that Python has various implementations. PyPy even shows that a simple loop inside a function runs faster than the same loop at module level because of the JIT; I think Cython shows the same. Not to mention that because of the `if __name__ == "__main__": main()` "hack", which is very often used today, this happens more often than not. I therefore maintain that my answer is in the general direction of the OP. However, I do agree with you that it's not a universal rule. No optimization ever is. – ljetibo Feb 21 '15 at 05:32

It makes no difference. It's on the stack either way.

john