
My program fails with a 'std::bad_alloc' error. The program is scalable, so I've tested a smaller version with valgrind and there are no memory leaks.

This is an application of statistical mechanics, so I am basically creating hundreds of objects, changing their internal data (in this case STL vectors of doubles), and writing to a data file. The objects are created inside a loop, so when each iteration ends their memory should be freed. Something like:

for (cont = 0; cont < MAX; cont++) {
    classSection seccion;
    seccion.GenerateObjects(...);
    while (somecondition) {
        seccion.evolve();
        seccion.writedatatofile();
    }
}   // seccion goes out of scope at the end of each iteration

So there are two variables which set the computing time of the program: the size of the system and the number of runs. It only crashes for big systems with many runs. Any ideas on how to catch this memory problem?

Thanks,

Gabriel

2 Answers


Run the program under a debugger so that it stops once the exception is thrown and you can observe the call stack.

The most probable problems are:

  • a request for an unreasonably large block of memory, or
  • too many objects created on the heap, so that memory genuinely runs out.
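
For example, with gdb you can set `catch throw` before `run`, and the debugger will break at the point where the exception is raised so you can inspect the backtrace. If attaching a debugger to a long run is awkward, a rough complement is to wrap the loop in a try/catch so the program at least reports how far it got. This is only a sketch; `classSection`, `MAX` and `somecondition` are the names from the question's pseudocode and are assumed to be defined elsewhere:

    #include <iostream>
    #include <new>   // std::bad_alloc

    int main() {
        int cont = 0;
        try {
            for (cont = 0; cont < MAX; cont++) {
                classSection seccion;
                seccion.GenerateObjects(/* ... */);
                while (somecondition) {
                    seccion.evolve();
                    seccion.writedatatofile();
                }
            }
        } catch (const std::bad_alloc& e) {
            // Report which iteration ran out of memory before exiting.
            std::cerr << "std::bad_alloc in iteration " << cont
                      << ": " << e.what() << '\n';
            return 1;
        }
        return 0;
    }
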

sharptooth
  • Note that "a request for an unreasonably large block of memory" could also be a request for a negative-sized block of memory, with the negative number getting implicitly cast to `size_t` (which is an unsigned type). – Karl Knechtel Dec 08 '10 at 10:26
  • But such a request would be shown by valgrind as a warning, wouldn't it? – Gabriel Dec 08 '10 at 10:41
  • Thanks sharptooth. Any suggestions on how to check if there are too many objects created on the heap? – Gabriel Dec 08 '10 at 10:47
  • @Karl Knechtel: Yes, that's one possibility. – sharptooth Dec 08 '10 at 10:49
  • @Gabriel: The best I can imagine is to overload `operator new` and make it increment some global variable (in a thread-safe manner, of course), but this doesn't look very good. Can't valgrind provide this information? – sharptooth Dec 08 '10 at 10:50
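
A minimal sketch of that counting idea, assuming C++11's `std::atomic` for the thread safety the comment mentions; the counter and reporting function are invented names for illustration, and the array forms `operator new[]`/`operator delete[]` are omitted for brevity:

    #include <atomic>
    #include <cstdio>
    #include <cstdlib>
    #include <new>

    // Count live heap allocations by replacing the global allocation functions.
    static std::atomic<long> g_live_allocations{0};

    void* operator new(std::size_t size) {
        if (size == 0) size = 1;          // must return a unique pointer even for 0 bytes
        void* p = std::malloc(size);
        if (!p) throw std::bad_alloc();
        ++g_live_allocations;
        return p;
    }

    void operator delete(void* p) noexcept {
        if (p) {
            --g_live_allocations;
            std::free(p);
        }
    }

    // Call this, e.g. once per outer-loop iteration, to see whether the
    // number of live allocations keeps growing from run to run.
    void report_live_allocations() {
        std::printf("live allocations: %ld\n", g_live_allocations.load());
    }
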

valgrind would not show a memory leak because you may well not have one of the kind valgrind can find.

You can have memory "leaks" even in garbage-collected languages like Java: the memory does get cleaned up there, but nothing stops a programmer from holding on indefinitely to data they no longer need (e.g. building up a hash map forever). The garbage collector cannot tell that the user does not really need that data anymore.

You may be doing something like that here, but we would need to see more of your code.
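
To make that concrete, here is a contrived C++ sketch of such a "logical leak" (the names are invented for illustration, not taken from the question): nothing is ever lost or double-freed, so valgrind reports no leak, yet memory use grows with every run.

    #include <map>
    #include <utility>
    #include <vector>

    // Contrived example: every state ever produced is kept "just in case",
    // so memory grows with the number of runs even though nothing is
    // leaked in the sense valgrind checks for.
    std::map<int, std::vector<double>> history;   // never pruned

    void record_run(int run, std::vector<double> state) {
        history[run] = std::move(state);          // grows without bound
    }
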

By the way, if you have a collection that really does hold masses of data, you are often better off using std::deque rather than std::vector, unless you really need it all to be contiguous.
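
The reason is that std::vector needs one contiguous block, so a very large vector can throw std::bad_alloc when no single free block of that size is available (particularly in a 32-bit address space), whereas std::deque allocates in smaller chunks. A trivial illustration:

    #include <deque>
    #include <vector>

    std::vector<double> contiguous_data;   // one large contiguous block
    std::deque<double>  chunked_data;      // many smaller blocks; still
                                           // supports push_back and indexing
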

CashCow
  • Thanks CashCow. I thought the objects were destroyed each time the loop went around, but I guess something is only thrown away at the end of the program. I cannot post the code here. I'll try the deques as well. – Gabriel Dec 08 '10 at 11:23