
When assigning values to a large array, the used memory keeps increasing even though no new memory is allocated. I am checking the memory usage simply with the Task Manager (Windows) or the System Monitor (Ubuntu).

The problem is the same on both operating systems. I am using GCC 4.7 and 4.6, respectively.

This is my code:

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    int i, j;
    int n = 40000000;   //array size
    int s = 100;
    double *array;

    array = malloc(n * sizeof(double));     //allocate array
    if (array == NULL) {
        return -1;
    }

    for (i = 0; i < n; i++) {       //loop over the array; memory use grows during this loop
        for (j = 0; j < s; j++) {   //inner loop only to slow the program down
            array[i] = 3.0;
        }
    }
    return 0;
}

I do not see any logical problem, and to my knowledge I do not exceed any system limits either. So my questions are:

  • can the problem be reproduced by others?

  • what is the reason for the growing memory?

  • how do I solve this issue?

  • I can see there is no array! – haccks Feb 13 '14 at 20:50
  • Yes there is, it's called `array`. – abligh Feb 13 '14 at 20:50
  • @abligh; Do you mean the variable name `array`? I am talking about the data structure. – haccks Feb 13 '14 at 20:53
  • Also see http://stackoverflow.com/q/131303/13422 – Zan Lynx Feb 13 '14 at 20:56
  • @haacks: he is `malloc()`-ing `n` doubles, which he is then accessing as an array - see `array[i]`. That's how you *dynamically* allocate an array in C. – abligh Feb 13 '14 at 20:57
  • @abligh; First, it's called **dynamic memory allocation**, not dynamic array allocation. Second, **arrays are not pointers**. Read [c-faq: Arrays and Pointers](http://www.c-faq.com/aryptr/index.html). – haccks Feb 13 '14 at 21:01
  • @haacks. I did not say "dynamic array allocation". He is doing "dynamic allocation" of n-doubles, and accessing that as an array. I know arrays are not pointers, thanks, having spent over 20 years programming in C. That doesn't change the fact that this is how you dynamically allocate an array in C (as opposed to C++ which has `new[]`). – abligh Feb 13 '14 at 21:04
  • @abligh; You said: *That's how you dynamically allocate an array in C*. Also read [this](http://djmnet.org/lore/arrays-are-not-pointers.txt). – haccks Feb 13 '14 at 21:06
  • @haacks yes, he's dynamically allocating an array. Dynamically allocating memory for an array, if you prefer. I did not call what he's doing "dynamic array allocation". The OP is obviously a newbie. How on earth do you think your comment helps with his question? Or are you just trying to indicate you know more C than him? – abligh Feb 13 '14 at 21:08
  • @abligh; *That doesn't change the fact that this is how you dynamically allocate an array in C*: No. The fact is that although they look very similar, they are much different. *How on earth do you think your comment helps?*: I provided the c-faq link for that. (And it's `haccks` :) ). – haccks Feb 13 '14 at 21:11
  • @haccks (right this time), the faq doesn't answer his question. And the 'dynamically allocate [an] array' construction is common, for instance: http://stackoverflow.com/questions/455960/dynamic-allocating-array-of-arrays-in-c and (more generally) http://bit.ly/1fkDTc9 – abligh Feb 13 '14 at 21:18
  • @abligh; *Or are you just trying to indicate you know more C than him?*: Oh please! I am also a newbie! Just trying to learn programming :). The aim of my comment was to point out to the OP that arrays are not pointers. – haccks Feb 13 '14 at 21:18
  • @abligh; OK. Peace :) – haccks Feb 13 '14 at 21:23

2 Answers


When modern systems 'allocate' memory, the pages are not actually allocated in physical RAM; you get a virtual memory allocation. As you write to those pages, physical pages are taken. So the virtual memory taken increases when you do the `malloc()`, but the physical RAM is only taken as you write the values in (on a page-by-page basis).
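
As a rough demonstration of this (a minimal sketch, Linux-only, assuming the `VmRSS` field of `/proc/self/status`; the helper name `vm_rss_kb` is made up for illustration), the resident set size only grows once the pages are actually written:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Read this process's resident set size (in kB) from /proc/self/status.
static long vm_rss_kb(void) {
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];
    long kb = -1;
    if (f == NULL) {
        return -1;
    }
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "VmRSS:", 6) == 0) {
            sscanf(line + 6, "%ld", &kb);
            break;
        }
    }
    fclose(f);
    return kb;
}

int main(void) {
    size_t i, n = 40000000;
    double *array = malloc(n * sizeof(double));
    if (array == NULL) {
        return -1;
    }

    printf("RSS after malloc: %ld kB\n", vm_rss_kb());   // small: pages not faulted in yet

    for (i = 0; i < n; i++) {
        array[i] = 3.0;   // the first write to each page takes a physical page
    }

    printf("RSS after writes: %ld kB\n", vm_rss_kb());   // now roughly n * sizeof(double) larger
    free(array);
    return 0;
}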

abligh

You should see the virtual memory used increase immediately. After that, the RSS (resident set size), i.e. the real memory used, will increase as you write into the newly allocated memory. More information at How to measure actual memory usage of an application or process?

This is because memory allocated on Linux, and on many other operating systems, isn't actually given to your program until you use it.

So you could malloc 1 GB on a 256 MB machine, and not run out of memory until you actually tried to use all 1 GB.

In Linux there is a group of overcommit settings that change this behavior. See Cent OS: How do I turn off or reduce memory overcommitment, and is it safe to do it?
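
To see the overcommit side of this directly (a sketch, assuming a 64-bit build and the default `vm.overcommit_memory = 0` heuristic; with strict accounting enabled the `malloc()` below would typically fail instead):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    // Ask for far more address space than most machines have as RAM.
    size_t huge = (size_t)16 * 1024 * 1024 * 1024;   // 16 GB
    char *p = malloc(huge);

    if (p == NULL) {
        puts("malloc failed (strict overcommit or address-space limit)");
        return 1;
    }
    puts("malloc succeeded; almost no physical memory is in use yet");
    // Writing through all 16 GB on a smaller machine would not make
    // malloc() fail retroactively; it would invoke the OOM killer.
    free(p);
    return 0;
}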

Zan Lynx
  • The OS definitely allows you to allocate more virtual memory than is physically available to begin with? – OJFord Feb 13 '14 at 20:53
  • @OllieFord: By default, yes. Then, when the program uses all the available physical and swap memory, it will get hammered with an OOM (Out Of Memory) kill. But Linux lets you change this to strict mode if you want it. – Zan Lynx Feb 13 '14 at 20:55
  • @ZanLynx is correct, save for the fact that even if you disable overcommit, the resident size does not equal the virtual size when you first allocate. It just prevents the sum of allocated virtual sizes from exceeding physical memory. So *technically* it does not change the behaviour (the virtual/physical difference in allocation). – abligh Feb 13 '14 at 20:59
  • @ZanLynx Interesting - thanks. Do you know why it is configured this way? Seems redundant. I could understand if other programs were currently using this memory - but if it's more than actually exists it seems worthless to allow? – OJFord Feb 13 '14 at 21:21
  • @OllieFord: The biggest reason I know of is to implement `fork()`. After a successful fork there are now TWO processes. Without overcommit, each one would now require a full commit of all its memory. If it is a large Java server process using 2 GB on a 4 GB machine and it wants to fork/exec `ls`, this would fail without overcommit (see the sketch below). – Zan Lynx Feb 13 '14 at 22:20
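
To illustrate Zan Lynx's `fork()` point (a sketch, assuming POSIX; the sizes are made up, and the copy-on-write child needs no second physical copy, so the fork succeeds under overcommit even when twice the heap could never be committed):

#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    size_t n = (size_t)256 * 1024 * 1024;   // stand-in for "a large fraction of RAM"
    char *heap = malloc(n);
    pid_t pid;

    if (heap == NULL) {
        return 1;
    }
    memset(heap, 1, n);                     // actually commit the pages

    pid = fork();                           // COW: the child shares these pages for now
    if (pid == 0) {
        execlp("ls", "ls", (char *)NULL);   // child replaces itself with ls immediately
        _exit(127);                         // only reached if exec fails
    }
    if (pid > 0) {
        waitpid(pid, NULL, 0);              // parent waits for ls to finish
    }
    free(heap);
    return pid < 0 ? 1 : 0;
}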