2

In an effort to ask only what I'm really looking for here: I'm only concerned with whether it's considered bad practice to declare an array as below, where the size could vary. If it is, I would generally malloc() instead.

void MyFunction()
{
    int size;

    //do a bunch of stuff
    size = 10; //but could have been something else
    int array[size];

    //do more stuff...
}
Maximus
  • You mean `int array[size];` and `const int size = 10;`? – Jacob Jun 07 '10 at 23:01
  • Everyone here is talking heap versus stack, but what about scoping? The title of your post reads like a scoping question, in which case the general consensus is to declare at the innermost scope possible. However, ignoring the title this does read like a heap vs stack question, so you might want to change the title. – Mawg says reinstate Monica Jun 08 '10 at 00:35
  • And a quick search of this site reveals about 400 questions about "heap or stack". Perhaps one of them already answers your question? http://stackoverflow.com/search?q=%22heap+or+stack%22 – Mawg says reinstate Monica Jun 08 '10 at 00:37
  • @Jacob: `const int` can't be used where compile-time constants are expected in C (unlike C++). `enum` can be used instead, however. – jamesdlin Jun 08 '10 at 01:39
  • @Jacob, yes... fixed that. Honestly, I really should have been much clearer, I posted this somewhat in haste leaving work. I'm honestly fine with my casting, etc.. with method 1... the question is more so just if method 2 is bad practice or not. – Maximus Jun 08 '10 at 02:21
  • @Jacob: In C language, `const int size = 10` or just `int size = 10` makes absolutely no difference in this context. – AnT stands with Russia Jun 11 '10 at 01:19

5 Answers

2

Generally yes, this is bad practice, although newer standards (C99) allow the syntax. In my opinion you should allocate the memory you want to use on the heap and release it once you're done with it. Since there is no portable way of checking whether the stack is big enough to hold that array, you should use a method whose failure can actually be checked - like malloc/calloc & free. In the embedded world, stack size can be an issue.
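A minimal sketch of that advice, reworking the question's function to take the buffer from the heap (the return-value convention is illustrative, not from the question):

```c
#include <stdlib.h>

/* Heap-allocated version of the question's function.
   Returns 0 on success, -1 on allocation failure. */
int MyFunction(void)
{
    int size;

    /* do a bunch of stuff */
    size = 10;                              /* but could have been something else */

    int *array = malloc(size * sizeof *array);
    if (array == NULL)
        return -1;                          /* a checkable failure, unlike stack exhaustion */

    /* do more stuff... */

    free(array);                            /* release it once you're done */
    return 0;
}
```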

If you are worried about fragmentation you can create your own memory allocator, but this is a totally different story.

INS
1

Each approach has its advantages and disadvantages.

Dynamic Allocation:
Slow, but you can detect when there is no memory left to give your program by checking the returned pointer.

Stack Allocation:
Available only in C99, and blazingly fast, but in case of a stack overflow you are out of luck.

In summary, when you need a small array, reserve it on the stack. Otherwise, use dynamic memory wisely.
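As a sketch of that summary (the function names are illustrative, not from the answer): a small, bounded scratch array can live on the stack, while anything sized at run time comes from the heap with a checked result.

```c
#include <stdlib.h>
#include <string.h>

/* Small, bounded scratch space: fine on the stack (no VLA needed). */
int sum_small(const int *src, int n)   /* caller guarantees n <= 64 */
{
    int tmp[64];
    memcpy(tmp, src, (size_t)n * sizeof *tmp);
    int total = 0;
    for (int i = 0; i < n; i++)
        total += tmp[i];
    return total;
}

/* Potentially large buffer: heap-allocate and let the caller check. */
int *make_big(size_t n)
{
    return malloc(n * sizeof(int));    /* NULL on failure; caller frees */
}
```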

Khaled Alshaya
  • Modern machines will rarely fail malloc if you are actually out of heap space - more likely, it will just kill the process. You have to go through special tweaking to get the "correct" behavior. You can search for overcommit, or here's a blog post about it: http://opsmonkey.blogspot.com/2007/01/linux-memory-overcommit.html Anyway, my point is that malloc may not be much more robust than putting it on the stack. – Steven Schlansker Jun 07 '10 at 23:13
  • I was referring to "but you can detect when there is no memory to be given to your programmer by checking the pointer." – Steven Schlansker Jun 07 '10 at 23:46
  • I guess stack allocation has another advantage. You need not worry about freeing the memory, no mem leaks :-) – Rohin Jun 08 '10 at 08:19
1

That depends. The first clearly isn't what I'd call "proper", and the second is only under rather limited circumstances.

In the first, you shouldn't cast the return from malloc in C -- doing so can cover up the bug of accidentally omitting inclusion of the correct header (<stdlib.h>).

In the second, you're restricting the code to C99 or a gcc extension. As long as you're aware of that, and it works for your purposes, it's all right, but hardly what I'd call an ideal of portability.

As far as what you're really asking: with the minor bug mentioned above fixed, the first is portable, but may be slower than you'd like. If the second is portable enough for your purposes, it'll normally be faster.
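Putting both fixes together, the first method might look like this (`make_array` is an illustrative name, not from the question):

```c
#include <stdlib.h>

/* The first method with the fixes discussed: no cast on malloc()
   (so a missing <stdlib.h> triggers a diagnostic instead of being
   hidden), and sizeof *array instead of sizeof(int) (still correct
   if the element type ever changes). */
int *make_array(size_t size)
{
    int *array = malloc(size * sizeof *array);
    return array;                      /* NULL on failure; caller frees */
}
```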

Jerry Coffin
  • In cases where the `malloc` is several lines away from the declaration of the pointer variable, *not* casting the result of malloc can cover up a different bug: for example doing `array = malloc(sizeof(short)*size));`. Which bug is more likely / dangerous? I think if you're going to criticise casting, you should equally criticise using `sizeof(int)` where `sizeof(*array)` is safer. – Steve Jessop Jun 07 '10 at 23:34
  • I've always understood to cast the way I have been for portability, but I'll consider the change to sizeof(*array) for sure... thanks for that. – Maximus Jun 08 '10 at 02:23
  • @Steve: you're right, I probably should have mentioned that problem as well. – Jerry Coffin Jun 08 '10 at 05:24
  • @Maximus: The cast is only "more portable" in the sense that with a cast it'll compile as C++, but without one it won't. In almost all cases I think that's a bad thing, though. For most C code, trying to compile it as C++ *should* be an error, because there are other subtle incompatibilities. Normally if you need to call it from C++ you should just compile it as C (with the same compiler, almost always). The exception would be static inline functions defined in headers used from both languages, if you like that sort of thing... – Steve Jessop Jun 10 '10 at 23:46
1

The argument against VLAs runs that because of the absolute badness of overflowing the stack, by the time you've done enough thinking/checking to make them safe, you've done enough thinking/checking to use a fixed-size array:

1) In order to safely use VLAs, you must know that there is enough stack available.

2) In the vast majority of cases, the way that you know there's enough stack is that you know an upper bound on the size required, and you know (or at least are willing to guess or require) a lower bound on the stack available, and the one is smaller than the other. So just use a fixed-size array.

3) In the vast majority of the few cases that aren't that simple, you're using multiple VLAs (perhaps one in each call to a recursive function), and you know an upper bound on their total size, which is less than a lower bound on available stack. So you could use a fixed-size array and divide it into pieces as required.

4) If you ever encounter one of the remaining cases, in a situation where the performance of malloc is unacceptable, do let me know...
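A hedged sketch of point 3 (the arena names are hypothetical, not from the answer): one fixed-size array with a known total bound, carved into slices as the recursion needs them, with a failure mode that can actually be checked.

```c
#include <stddef.h>

#define ARENA_MAX 4096                 /* the known upper bound on total size */

typedef struct {
    int buf[ARENA_MAX];
    size_t used;
} Arena;

/* Hand out a slice of n ints, or NULL if the bound would be exceeded --
   a checkable failure, unlike a stack overflow. */
int *arena_take(Arena *a, size_t n)
{
    if (n > ARENA_MAX - a->used)
        return NULL;
    int *slice = a->buf + a->used;
    a->used += n;
    return slice;
}

/* Release the most recent slice; LIFO order mirrors call nesting. */
void arena_release(Arena *a, size_t n)
{
    a->used -= n;
}
```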

It may be more convenient, from the POV of the source code, to use VLAs. For instance you can use sizeof (in the defining scope) instead of maintaining the size in a variable, and that business with dividing an array into chunks might require passing an extra parameter around. So there's some small gain in convenience, sometimes.
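For example, sizeof applied to a VLA in its defining scope is evaluated at run time, so the element count needn't be carried in a separate variable there (`vla_count` is an illustrative name):

```c
#include <stddef.h>

/* sizeof on a VLA is computed at run time in the defining scope. */
size_t vla_count(int n)                /* assumes n is small and positive */
{
    int buf[n];                        /* VLA: size fixed when the block is entered */
    return sizeof buf / sizeof buf[0]; /* equals n */
}
```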

It's also easier to miss that you're using a humongous amount of stack, yielding undefined behavior, if instead of a rather scary-looking int buf[1920*1024] or int buf[MAX_IMG_SIZE] you have an int buf[img->size]. That works fine right up to the first time you actually handle a big image. That's broadly an issue of proper testing, but if you miss some possible difficult inputs, then it won't be the first or last test suite to do so. I find that a fixed-size array reminds me either to put in fixed-size checks of the input, or to replace it with a dynamic allocation and stop worrying whether it fits on the stack or not. There is no valid option to put it on the stack and not worry whether it fits...

Steve Jessop
0

Two points from a UNIX/C perspective:

malloc is only slow when it is forced to call brk(). For reasonably sized arrays it is the same as allocating stack space for a variable. By the way, method #2 (via alloca, in the libc code I have seen) also invokes brk() for huge objects, so it is a wash. Note: with both #1 and #2 you still have to invoke, directly or indirectly, a memset-type call to zero the bytes in the array. This is just a side note to the real issue (IMO):

The real issue is memory leaks. alloca cleans up after itself when the function returns, so #2 is less likely to cause a problem. With malloc/calloc you have to call free() or start a leak.

jim mcnamara
  • Not really true. Malloc has to find free memory in the heap, which can become fragmented, causing slowdowns. Alloca is always going to allocate from the very tip of the stack, which is constant time. – Steven Schlansker Jun 08 '10 at 09:57