
I'm writing a resizable array class (like std::vector) as an exercise, using raw pointers (because I want to understand how they work before I start using smart pointers). However, Valgrind reports a memory leak in the checkMax() function.

template <typename T>
class Array {
public:
    Array() : len(0), maxLen(1) {
        array = new T[maxLen];
        // ...
    }
    Array(unsigned length, const T& content = T()) : len(length), maxLen(length * 2) {
        array = new T[maxLen];
        // ...
    }
    ~Array() {
        //delete[] array;
    }
    // ...
    void push_back(const T& content) {
        checkMax();
        // do stuff...
    }
private:
    T* array;
    unsigned int len;
    unsigned int maxLen;
    // ...
    void checkMax() {
        if (len == maxLen) {
            // allocate more memory for the array
            maxLen *= 2;
            T* temp = new T[maxLen]; // ------------- MEMORY LEAK HERE -------------
            for (unsigned int i = 0; i < len; i++) {
                temp[i] = array[i];
            }
            delete[] array;
            array = temp;
        }
    }
};

I have posted only the memory-relevant code here.

I do not understand why Valgrind is reporting a memory leak at the marked line. I do delete the old array, two lines later, after copying its contents into the enlarged one.

Also, if I un-comment the delete[] call in the destructor, I get a double free or corruption error and Valgrind reports an invalid delete (implying the pointer is deleted twice), so I am totally flummoxed. Any ideas?

EDIT: Thanks for the early replies! After reading the comments, I found that the problem lies not with my class, but with another function I was calling with an Array as an argument. If I remove the call to this function and add the delete[] call to the destructor, no memory leaks occur. Here is the function:

template <typename T>
void printContents(Array<T> ar) {
    for (unsigned int i = 0; i < ar.size(); i++) {
        cout << "Content at i in array = " << ar.at(i) << endl;
    }
}

After reading about the Rule of Three (thanks to chris) and the answer posted by Grizzly, I now understand why the delete[] was invalid. Because I had not written a copy constructor, shallow copying occurred: my array pointer was copied into the pointer inside ar, and when ar went out of scope, its destructor called delete[] on that pointer, making the delete[] on my original object invalid. Hence the exception. If I removed the delete[], the array would obviously remain allocated and result in a memory leak.
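For anyone hitting the same thing: an alternative (or complement) to writing the copy operations is to stop copying in the first place, by taking the parameter by const reference. Here is a sketch on a stripped-down stand-in for my class; copying is disabled with `= delete` (C++11) so the compiler proves no copy is made, and the const-qualified `size()`/`at()` are additions of this sketch, not shown in the class above:

```cpp
#include <iostream>

// Stripped-down stand-in for the Array<T> above. Copying is explicitly
// disabled so that any accidental pass-by-value becomes a compile error
// instead of a double free at run time.
template <typename T>
class Array {
public:
    Array() : len(0), maxLen(1) { array = new T[maxLen]; }
    ~Array() { delete[] array; }
    Array(const Array&) = delete;            // no shallow copies
    Array& operator=(const Array&) = delete;

    void push_back(const T& content) {
        if (len == maxLen) {                 // same growth scheme as checkMax()
            maxLen *= 2;
            T* temp = new T[maxLen];
            for (unsigned int i = 0; i < len; i++) temp[i] = array[i];
            delete[] array;
            array = temp;
        }
        array[len++] = content;
    }
    unsigned int size() const { return len; }               // const-qualified so
    const T& at(unsigned int i) const { return array[i]; }  // a const& can call them

private:
    T* array;
    unsigned int len;
    unsigned int maxLen;
};

// By const reference: no copy is made, so only one object ever owns the
// buffer and its destructor runs exactly once.
template <typename T>
void printContents(const Array<T>& ar) {
    for (unsigned int i = 0; i < ar.size(); i++) {
        std::cout << "Content at i in array = " << ar.at(i) << std::endl;
    }
}
```

With this signature, the original `printContents(Array<T> ar)` call sites work unchanged, but no second owner of the buffer is ever created.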

Thanks for the help, I have voted Grizzly's answer as correct.

Sumanth
    http://stackoverflow.com/questions/4172722/what-is-the-rule-of-three – chris May 25 '13 at 13:09
  • To elaborate on the rule of three comment, are you copying from one Array to another, which might result in the double-delete? – Scott Jones May 25 '13 at 13:15
  • @chris Yes, that was what I was thinking too. Sumanth, are you passing `Array` by value as a parameter anyplace, or creating copies in another way? Do you have a copy constructor that handles the `array` pointer properly? If you're missing the copy constructor, the default behavior would be to just copy the pointer, which means you'd have two pointers to the same `array`, hence a double delete. –  May 25 '13 at 13:16

1 Answer


You are leaking memory because you never call delete[] in the destructor. That means the array allocated by the last call to checkMax on an object (the "final" buffer of that object) is never freed. To solve that, uncomment the delete[] in the destructor.

As you mentioned, this gives you a double-free problem. That comes from a violation of the rule of three (in C++03 it is the rule of three; in C++11 the situation isn't as clear cut, but for this scenario the rule of three is enough to solve your problem). The rule of three basically states that when you define a custom destructor, copy constructor, or assignment operator, you need to define all of them. In this case you are most likely copying the Array somewhere. That means both instances contain the same pointer, and each tries to delete it in its destructor; calling delete on the same pointer twice is an error. To solve that, define a custom copy constructor and assignment operator that perform a deep copy of the array (allocate new memory and copy the contents over).
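To make that concrete, here is a sketch of the two missing special members on a stripped-down version of the class from the question (the `size()`/`at()` accessors are additions of this sketch, not from the original code):

```cpp
#include <algorithm> // std::copy, std::swap

// Only the memory-relevant parts of the question's Array<T> are reproduced.
template <typename T>
class Array {
public:
    Array() : len(0), maxLen(1) { array = new T[maxLen]; }

    // Copy constructor: deep copy, so each object owns its own buffer.
    Array(const Array& other) : len(other.len), maxLen(other.maxLen) {
        array = new T[maxLen];
        std::copy(other.array, other.array + len, array);
    }

    // Copy assignment via copy-and-swap: the by-value parameter is already a
    // deep copy (made by the copy constructor above); swapping hands its
    // buffer to *this, and `other`'s destructor frees the old buffer.
    Array& operator=(Array other) {
        std::swap(array, other.array);
        std::swap(len, other.len);
        std::swap(maxLen, other.maxLen);
        return *this;
    }

    ~Array() { delete[] array; } // now safe: every copy owns its own buffer

    void push_back(const T& content) {
        checkMax();
        array[len++] = content;
    }
    unsigned int size() const { return len; }
    const T& at(unsigned int i) const { return array[i]; }

private:
    T* array;
    unsigned int len;
    unsigned int maxLen;

    void checkMax() {
        if (len == maxLen) {
            maxLen *= 2;
            T* temp = new T[maxLen];
            for (unsigned int i = 0; i < len; i++) temp[i] = array[i];
            delete[] array;
            array = temp;
        }
    }
};
```

With both copy operations doing a deep copy, `printContents(Array<T> ar)` can keep taking its argument by value: the copy and the original each delete their own buffer.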

All that aside, I would personally suggest starting with smart pointers, and only later on, when you have a better grip on C++, experimenting with manual memory management.
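For illustration, here is a sketch (mine, C++11, not part of the question's code) of what the smart-pointer version would look like. With std::unique_ptr<T[]> there is no hand-written destructor at all, and the class is automatically move-only, so an accidental copy becomes a compile error instead of a run-time double free:

```cpp
#include <memory>
#include <utility>

// Sketch of the same growable array built on std::unique_ptr<T[]>.
// No destructor, copy constructor, or assignment operator is needed:
// the unique_ptr frees the buffer automatically, and copying the class
// simply does not compile (it is move-only).
template <typename T>
class Array {
public:
    Array() : len(0), maxLen(1), array(new T[1]) {}

    void push_back(const T& content) {
        if (len == maxLen) {
            maxLen *= 2;
            std::unique_ptr<T[]> temp(new T[maxLen]);
            for (unsigned int i = 0; i < len; i++) temp[i] = array[i];
            array = std::move(temp); // the old buffer is freed here: no leak
        }
        array[len++] = content;
    }
    unsigned int size() const { return len; }
    const T& at(unsigned int i) const { return array[i]; }

private:
    unsigned int len;     // declared before `array` so the constructor's
    unsigned int maxLen;  // initializer list runs in a safe order
    std::unique_ptr<T[]> array;
};
```

Trying to pass this Array by value (as in the question's printContents) would fail to compile, which points you straight at the copying problem.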

Grizzly
    Good answer, but I disagree with the last point of your post, I think manual memory management should be understood when you're learning, as the goal is to understand, not make things work. (And we probably all learned about this by doing exactly what the poster just did at some point!) –  May 25 '13 at 14:04
  • @wilsonmichaelpatrick: I'm not arguing that one should never learn about manual memory management, just that one shouldn't start with it (as the OP seems to be doing). C++ is hard enough even without considering manual memory management, and smart pointers wouldn't magically solve all of the issues, but would serve to make them more apparent (`unique_ptr` would simply forbid the copy, leading directly to the question of why, and what to do about it). – Grizzly May 25 '13 at 14:19