
I have a situation where my program starts to slow down and eventually halts. I am guessing it is because I am not reclaiming memory correctly. Can anyone please tell me the correct way to free memory in this case?

simplified definitions:

#include <string>
#include <vector>

typedef struct {            // metric_t is used by region_t, so it is defined first
    std::string name;
    std::vector<double> means;
} metric_t;

typedef struct {
    std::string name;
    std::vector<metric_t> metrics;
} region_t;

typedef struct {
    std::string name;
    std::vector<region_t> regions;
} data_t;

Main loop:

for(int i = 0; i < 100; i++)
{
    data_t data;
    prepare_data(&data);

    /* Usage of data here */
}

prepare_data function:

void prepare_data(data_t * data)
{
    region_t new_region;

    data->regions.push_back(new_region);

    for(int j=0; j< 100000; j++)
    {
        metric_t new_metric;
        /* put some data in new_metric */
        data->regions.back().metrics.push_back(new_metric);
    }
}
Cashif Ilyas
  • It doesn't look like you need to do anything special to free memory for these vectors. – πάντα ῥεῖ Jun 22 '16 at 06:58
  • Unrelated note: that `typedef` is pointless. In C++ you write `struct data_t { ... }`. – MSalters Jun 22 '16 at 07:00
  • Possible duplicate of ["Right" way to deallocate an std::vector object](http://stackoverflow.com/questions/3054567/right-way-to-deallocate-an-stdvector-object) – xenteros Jun 22 '16 at 07:01
  • What environment are you running this in? Maybe you need to increase the heap size or something like that. – Ahmed Akhtar Jun 22 '16 at 07:03
  • @AhmedAkhtar I am using Ubuntu 16.04 LTS. If the first few iterations of the main loop run okay, doesn't that mean that the stack is enough for the `data` structure? – Cashif Ilyas Jun 22 '16 at 07:06
  • No, actually the vector gets bigger and bigger as the loop progresses. What IDE are you using to run the code? – Ahmed Akhtar Jun 22 '16 at 07:09
  • @AhmedAkhtar I am using Eclipse Kepler. Isn't `data` destroyed at the end of each iteration of the "main loop", since it is defined inside it? So if the first few iterations work okay, doesn't that mean that there is enough room for 100000 elements? – Cashif Ilyas Jun 22 '16 at 07:12
  • Please also post the definition of `metric_t`. – Ahmed Akhtar Jun 22 '16 at 07:21
  • Did the `reserve` suggestion given by **Humam** help? – Ahmed Akhtar Jun 22 '16 at 07:30
    @AhmedAkhtar: *"No, actually the vector gets bigger"* - Cashif asked if he could conclude that the stack was enough for the `data` structure, and the correct answer to that is "yes": the stack usage doesn't grow from one loop iteration to the next. When the `vector` gets bigger it increases its dynamic memory (heap) usage. – Tony Delroy Jun 22 '16 at 07:35
  • @TonyD Yes that is right. – Ahmed Akhtar Jun 22 '16 at 07:37
  • @CashifIlyas Were you able to solve the problem? – Ahmed Akhtar Jun 23 '16 at 04:32
  • @AhmedAkhtar No. But I guess I didn't explain the problem correctly here. – Cashif Ilyas Jun 25 '16 at 16:48

4 Answers


The rule is simple: any new must be balanced with a delete, and any new[] balanced with a delete[].

The standard C++ library container classes obey these rules, so unless you've written new or new[] yourself (which you haven't), all the memory cleanup will be done for you.
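
For illustration only, here is a minimal sketch of that rule with raw allocations (nothing in your posted code needs this, since the containers manage their own storage):

#include <vector>

void raw_allocation_sketch()
{
    int * one  = new int(42);      // new   must be balanced with delete
    int * many = new int[100];     // new[] must be balanced with delete[]

    delete one;
    delete [] many;

    std::vector<int> v(100);       // no new/delete needed here: the vector
}                                  // releases its own storage at end of scope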

The cause of any performance problem lies elsewhere: I think it's due to your pushing elements to a vector one-by-one a large number of times (which will cause multiple reallocations of memory). Perhaps use a std::list rather than a std::vector?
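
If you want to try that suggestion, the only change to the simplified definitions is the container type; push_back is used in exactly the same way on a std::list (a sketch, assuming metric_t is defined as in the question):

#include <list>
#include <string>

typedef struct {
    std::string name;
    std::list<metric_t> metrics;   // a list never relocates elements that are already stored
} region_t;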

Don't worry about the fact that data is an automatic variable: sizeof(data) is small.

Also, avoid _t suffixes if you can: POSIX reserves names ending in _t (although many folk ignore that, since the C++ standard itself allows such names). Plus there's no need for the typedef around your struct definitions in C++.
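
In other words, each struct can simply be written like this (an illustrative rewrite with the typedef and the _t suffix dropped):

#include <string>
#include <vector>

struct Metric {                    // plain C++ struct: no typedef, no reserved _t suffix
    std::string name;
    std::vector<double> means;
};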

Bathsheba
  • Thank you for your answer. You suggested that allocating memory one by one might be slowing it down. I am not concerned about the speed, but the problem is that it works okay for `i` less than about 30 or so and then it starts to halt. – Cashif Ilyas Jun 22 '16 at 07:14
  • I am indeed suggesting that. Try using std::list. – Bathsheba Jun 22 '16 at 07:15
  • Or use .reserve(100000) to allocate enough memory for all of your data; it will remove the overhead of memory reallocations, and you won't have to change your application. – MaciekGrynda Jun 22 '16 at 07:31
  • `std::list` requires a lot more distinct dynamic allocations, with associated overheads - it will use considerably more memory and time than a pre-reserved `vector`, though it's less susceptible to fragmentation-related failures. – Tony Delroy Jun 22 '16 at 07:56
  • The acid test, of course, is to try it and profile it. The *current* approach is not working. – Bathsheba Jun 22 '16 at 07:59

You don't seem to have any dynamic memory management (i.e. calls to new or new[]) so there is nothing you need to do. Everything will automatically be freed when the declaration scope ends.
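
Applied to your main loop, that means the clean-up already happens on every iteration (the comments below merely annotate the code from the question):

for(int i = 0; i < 100; i++)
{
    data_t data;            // default-constructed, owns no heap memory yet
    prepare_data(&data);    // the vectors inside data allocate on the heap

    /* Usage of data here */
}                           // data's destructor runs here on each iteration,
                            // freeing every region, metric and string it holds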

Maybe you have a different problem? You may want to use a profiler and get some data.

nvoigt

You do not have any memory management problem; std::vector is doing everything for you. However, this should give you better performance:

void prepare_data(data_t * data){
    region_t new_region;
    data->regions.push_back(new_region);
    data->regions.back().metrics.reserve(100000);
    for(int j=0; j< 100000; j++){
        metric_t new_metric;
        /* put some data in new_metric */
        data->regions.back().metrics.push_back(new_metric);
        //data->regions.back().metrics.emplace_back(a1,a2,...); if you can construct metric_t directly here it would be better
    }
}
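
Regarding the emplace_back hint in the code above: it only pays off if metric_t can be constructed directly from the arguments. For example, with a hypothetical constructor that is not part of the question's definition:

#include <string>
#include <utility>
#include <vector>

struct metric_t {
    std::string name;
    std::vector<double> means;

    explicit metric_t(std::string n) : name(std::move(n)) {}   // hypothetical constructor
};

// then, inside the loop:
data->regions.back().metrics.emplace_back("metric_" + std::to_string(j));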
Humam Helfawi

To find out whether you are leaking memory, either use a tool your operating system provides for monitoring process resource usage (e.g. htop on Linux) or a code analyzer (e.g. valgrind on Linux).

There might be other issues besides pushing back to std::vector without calling reserve(). The default growth strategy for the vector's memory buffer is exponential.
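
You can watch that growth strategy in your own standard library implementation with a few lines; the exact capacities printed are implementation-defined (a sketch):

#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v;
    std::size_t last = 0;

    for (int i = 0; i < 100000; ++i) {
        v.push_back(i);
        if (v.capacity() != last) {             // a reallocation just happened
            last = v.capacity();
            std::cout << "size " << v.size()
                      << " -> capacity " << last << '\n';
        }
    }
}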

Jan Korous
  • "is exponential" - what do you mean by that? – Bathsheba Jun 22 '16 at 08:03
  • @Bathsheba Every time `std::vector` does not have enough memory for an element to be inserted, the size of its buffer is doubled (other factors like 1.5 could be used). That means the buffer grows exponentially. – Jan Korous Jun 22 '16 at 08:07
  • Interesting. Does the C++ standard *mandate* that? I *thought* it was smarter than that. – Bathsheba Jun 22 '16 at 08:08
  • @Bathsheba I find it actually a pretty smart default strategy. It is not mandated by the standard. See e.g. https://stackoverflow.com/questions/5232198/about-vectors-growth – Jan Korous Jun 22 '16 at 08:11