
When running this code:

#include <iostream>
#include <vector>
#include <deque>

template< typename C >
void fillToMax( C & collection, typename C::value_type value )
{
    try
    {
        while( true )
            collection.push_back( value );
    }
    catch( std::bad_alloc const& )
    {
        std::cout << "bad alloc with size " << collection.size() << std::endl;
    }
}

void fillVector()
{
    std::vector<long> vecL;
    fillToMax( vecL, 123 );
}

void fillDeque()
{
    std::deque<long> deqL;
    fillToMax( deqL, 123 );
}

int main()
{
    fillVector();
    fillDeque();
}

With vector I get the expected bad_alloc, which is easy to try/catch. The problem is when I substitute vector with deque: in that case my machine just crashes... black screen, it reboots, and when it is back up it claims: you had an unexpected problem!

I would like to use deque instead of vector to store a larger number of items without the constraint of contiguous space. That would let me store more data, but I cannot afford to have my application crash, so I would like to know how I can get a bad_alloc here instead.

Is this possible?

My tests use MinGW-W64 gcc 4.8.2 (x86_64-posix-seh-rev4) on Windows 8.1.

Xarylem
  • I think that's intentional - he wants to get a bad_alloc. He just wants to catch the error – chrisb2244 Oct 02 '14 at 15:24
  • What is the reason for the crash with deque? Have you tried using a debugger? – kraskevich Oct 02 '14 at 15:25
  • See [this](http://ideone.com/3c5VL9) – P0W Oct 02 '14 at 15:27
  • Possibly related: http://stackoverflow.com/questions/2567683/why-does-my-program-occasionally-segfault-when-out-of-memory-rather-than-throwin – Christian Hackl Oct 02 '14 at 15:32
  • So this is just a guess, but the reason you are getting the crash and not the exception is that deques allocate the objects one at a time in a list, while a vector allocates the objects as a block, trying to double the container size when you get too large. In the case of the vector, this all-at-once allocation will catch when you are too large with a little bit of overhead, so the system remains relatively stable. When you sneak up on the limit with deque, you'll still use most of the system's memory up by the time you run out, and that'll make the whole OS unstable. – IdeaHat Oct 02 '14 at 15:35
  • I guess the moral of the story is that catching `std::bad_alloc` is well-intentioned, but does not quite work in practice. – Christian Hackl Oct 02 '14 at 15:36
  • http://www.cplusplus.com/reference/deque/deque/push_back/ - it just means your compiler is not fully compliant – Fox Oct 02 '14 at 15:40
  • @ChristianHackl It can be made to work in practice _if_ the system is not broken. (I've done it under Solaris.) – James Kanze Oct 02 '14 at 15:47
  • @JamesKanze: Would you say that it is worth the trouble in general? Or does it depend on the application area (as in: important for system-critical software, not so important for desktop GUIs)? – Christian Hackl Oct 02 '14 at 15:51
  • This is NOT a bad question and should not be closed. The user wants to know how to work around this problem, being able to use deque for a collection that might grow out of hand and catch a bad_alloc. – CashCow Oct 02 '14 at 15:55
  • @Xarylem I have modified your question to make it look better, because I think it is a very good question. – CashCow Oct 02 '14 at 16:11
  • Thank you! I am happy if my (unfortunate) issue can somehow help other people as well. (I guess you forgot to include deque) – Xarylem Oct 02 '14 at 16:16
  • Define "my machine just crashes" – Lightness Races in Orbit Oct 02 '14 at 16:16
  • I use: gcc version 4.8.2 (x86_64-posix-seh-rev4, Built by MinGW-W64 project). My machine (with win8.1) just crashes = blackscreen, reboots and when up again claims: you had an unexpected problem! (very helpful :)) – Xarylem Oct 02 '14 at 16:20
  • Your OS should never, ever do that in an out-of-memory situation unless the OS has a bug or you have a hardware problem. Have you disabled the pagefile? – drescherjm Oct 02 '14 at 16:26
  • No, I haven't disabled it. – Xarylem Oct 02 '14 at 16:29
  • I am testing this with Visual Studio 2010 under Win8.1 x64. On the first try I got bad_alloc on both, but I had forgotten to make an x64 build, so that happened at around 2 GB. Will test again on x64. – drescherjm Oct 02 '14 at 16:35
  • @ChristianHackl It depends on the application. For most applications, it's more trouble than it's worth (and in many cases, the _only_ thing which could cause `std::bad_alloc` would be a memory leak, so catching won't help). I've worked on servers, however, where specific input requests could require more memory than available. We'd catch `bad_alloc` at the top level of the request, with destructors freeing all of the memory for the request, and report an insufficient resources error. – James Kanze Oct 02 '14 at 16:46
  • I did not get an OS crash or bad_alloc; however, after 30 minutes of disk thrashing with no way to kill the application (no response at all from the keyboard; the mouse could move but not click on anything, and the display was not updating) I had to push the power button. – drescherjm Oct 02 '14 at 17:06
  • @drescherjm If that's the case, you've got a problem with the system. (I know that I had that problem at times with Solaris 2.2, but it was fixed in 2.4.) – James Kanze Oct 02 '14 at 17:46

3 Answers


You don't say what system you're using, so it's hard to be sure, but some systems "overcommit", which basically makes a conforming implementation of C++ (or even C) impossible: the system will say that there is memory available when there isn't, and crash when you try to use it. Linux is the most widely documented culprit here, but you can reconfigure it to work correctly.

The reason you get bad_alloc with vector is that vector allocates much larger chunks, and even with overcommit, the system will refuse to allocate a chunk that is too big. Also, many malloc implementations use a different allocation strategy for very large chunks; IIRC, the malloc in Linux switches to using mmap beyond a certain size, and the system may refuse an mmap even when an sbrk would have succeeded.
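
A rough way to observe this difference is to replace the global operator new with a logging version and compare the request sizes the two containers generate: vector makes a few geometrically growing requests, while deque makes a stream of small fixed-size chunk requests. A minimal sketch (the iteration counts are arbitrary, and the C++98-style exception specifications match gcc 4.8's default dialect):

#include <cstdio>
#include <cstdlib>
#include <deque>
#include <new>
#include <vector>

// Log every allocation request the containers make.
void* operator new( std::size_t size ) throw( std::bad_alloc )
{
    std::printf( "request: %lu bytes\n", (unsigned long)size );
    if ( void* p = std::malloc( size ) )
        return p;
    throw std::bad_alloc();
}

void operator delete( void* p ) throw()
{
    std::free( p );
}

int main()
{
    std::printf( "--- vector ---\n" );
    std::vector<long> v;
    for ( long i = 0; i < 1000; ++i )
        v.push_back( i );   // a few requests, each roughly doubling in size

    std::printf( "--- deque ---\n" );
    std::deque<long> d;
    for ( long i = 0; i < 1000; ++i )
        d.push_back( i );   // many same-sized chunk requests
}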

James Kanze
  • which is essentially the answer I was giving but wanted to fill in the detail – CashCow Oct 02 '14 at 15:47
  • and by the way you have not given him a solution – CashCow Oct 02 '14 at 16:11
  • @CashCow Well, the real solution is not to use systems which are broken:-). If I needed that sort of robustness, I'd not use Linux. – James Kanze Oct 02 '14 at 16:42
  • Well, sometimes our hands are tied. You know, we work for companies or clients and can't always make these decisions... You know, the more memory you pump into a system, the more users want to use it to cache everything. And then of course if it crashes there is no backup. – CashCow Oct 02 '14 at 16:46
  • @CashCow I know. I have to work with Windows at present. But Linux is not the only system which over-commits. At least there is a way of configuring it not to (assuming you have root privileges); for the others, I don't know. (I don't know what the status is today, but when I first tested it, Windows would pop up a window and wait for user intervention. Very useful on a server without anyone in front of the screen.) – James Kanze Oct 02 '14 at 16:49

The quick answer to why vector ends in bad_alloc while deque ends in a crash is that, because vector uses a contiguous buffer, it hits bad_alloc "sooner", and it does so on a request for one large chunk.

Why? Because one large contiguous buffer is less likely to be available than many smaller ones.

vector allocates a certain amount and, when that is full, tries a big "realloc" for a bigger buffer. It may be possible to extend the current allocation in place, but it may not, in which case a whole new chunk of memory must be found.

Let's say it grows by a factor of 1.5. If the vector currently occupies 40% of the available memory, the new buffer needs 60%, and because the old buffer must stay alive while the contents are copied across, you need 40% + 60% = 100% at once. That takes you to the limit, so it fails with bad_alloc even though in reality you are only using 40% of the memory.

So in reality there is still memory available, and even operating systems that use "optimistic" memory allocation will not over-commit on a request that large: you asked for a lot in one go, and the system could not give it to you. (They are not always totally optimistic.)

deque, on the other hand, asks for one chunk at a time. You will genuinely use your memory up, which makes it better for large collections, but it has the downside that when you run out of memory, you really do run out. And your lovely optimistic memory allocator cannot handle it, so your process dies. (The OS kills something to free up memory; sadly, it was your process.)
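
You can watch vector's geometric growth directly by printing the capacity each time it changes. A small sketch (the element count is arbitrary, and the exact growth factor depends on the implementation: libstdc++ doubles, others use 1.5):

#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    std::vector<long> v;
    std::size_t lastCapacity = 0;
    for ( long i = 0; i < 100000; ++i )
    {
        v.push_back( i );
        if ( v.capacity() != lastCapacity )   // a reallocation happened
        {
            std::cout << "size " << v.size()
                      << " -> capacity " << v.capacity() << std::endl;
            lastCapacity = v.capacity();
        }
    }
}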


Now, for your solution of how to avoid this happening: your answer might be a custom allocator, i.e. the second template parameter of deque, which could check the real system memory available and refuse to allocate once you have hit a certain threshold.

Of course it is system dependent but you could have different versions for different machines.

You could also set your own arbitrary "limit", of course.
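
For illustration, a minimal C++03-style allocator along those lines might look like the following sketch. The 256 MB budget and the single global counter are arbitrary choices here; a real version would query actual system memory and would need to consider thread safety:

#include <cstddef>
#include <deque>
#include <new>

// Arbitrary example budget: refuse further allocation past ~256 MB total.
static std::size_t g_bytesUsed = 0;
static const std::size_t g_byteCap = 256u * 1024u * 1024u;

template <typename T>
class CappedAllocator
{
public:
    typedef T               value_type;
    typedef T*              pointer;
    typedef const T*        const_pointer;
    typedef T&              reference;
    typedef const T&        const_reference;
    typedef std::size_t     size_type;
    typedef std::ptrdiff_t  difference_type;

    template <typename U> struct rebind { typedef CappedAllocator<U> other; };

    CappedAllocator() {}
    template <typename U> CappedAllocator( const CappedAllocator<U>& ) {}

    pointer address( reference x ) const { return &x; }
    const_pointer address( const_reference x ) const { return &x; }

    pointer allocate( size_type n, const void* /*hint*/ = 0 )
    {
        const size_type bytes = n * sizeof(T);
        if ( g_bytesUsed + bytes > g_byteCap )
            throw std::bad_alloc();   // refuse before the OS gets into trouble
        pointer p = static_cast<pointer>( ::operator new( bytes ) );
        g_bytesUsed += bytes;
        return p;
    }

    void deallocate( pointer p, size_type n )
    {
        ::operator delete( p );
        g_bytesUsed -= n * sizeof(T);
    }

    size_type max_size() const { return g_byteCap / sizeof(T); }

    void construct( pointer p, const T& v ) { new( static_cast<void*>(p) ) T( v ); }
    void destroy( pointer p ) { p->~T(); }
};

template <typename T, typename U>
bool operator==( const CappedAllocator<T>&, const CappedAllocator<U>& ) { return true; }
template <typename T, typename U>
bool operator!=( const CappedAllocator<T>&, const CappedAllocator<U>& ) { return false; }

Declaring the container as std::deque<long, CappedAllocator<long> > (note the space in > > for a C++03 compiler) then makes the question's fillToMax catch bad_alloc exactly as it does with vector.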

Assuming your system is Linux, you might be able to turn overcommit off with:

echo 2 > /proc/sys/vm/overcommit_memory

You would need root (admin) permissions to do that. (Or get someone who has it to configure it that way).

Other ways to examine memory usage are described in the Linux manuals; the information is usually exposed under /proc.

If your system isn't Linux but another one that over-commits, you'll have to look up how to bypass it, possibly by writing your own memory manager. Otherwise, take the simpler option of an arbitrary configurable maximum size, sketched below.
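
As a sketch of that simpler option, the fillToMax from the question could simply stop at a configured element count (kMaxElements is an arbitrary example value):

#include <cstddef>

// Arbitrary, configurable cap on how many elements we are willing to store.
static const std::size_t kMaxElements = 100 * 1000 * 1000;

template< typename C >
void fillToMax( C & collection, typename C::value_type value )
{
    // Stop at the self-imposed limit instead of waiting for allocation
    // to fail, so we never get near the point where the OS falls over.
    while( collection.size() < kMaxElements )
        collection.push_back( value );
}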

Remember that with deque your allocator will only be invoked when you need to allocate a new "chunk" and not for every push_back.

CashCow

Xarylem, just attempting to answer "how can I prevent this" here...

You know something that throws bad_alloc: std::vector. You know something that crashes: std::deque.

So one way would be to create a new vector of size X; if that succeeds, drop the vector and push X more items into the deque. If it doesn't, you know you are walking into a quagmire. Something like:

std::deque<int> actualDequeToFill;
const std::size_t X = 1024 * 1024;   // probe size; tune to taste
for( ; ; )
{
    // Test first: can we still get a contiguous block of X ints?
    // Constructing the vector also touches the memory, which matters
    // on over-committing systems.
    bool haveSpace = true;
    try { std::vector<int> testVector( X ); }   // probe is freed again here
    catch( std::bad_alloc const& ) { haveSpace = false; }
    if( !haveSpace )
        throw std::bad_alloc();   // note: bad_alloc takes no message string
    for( std::size_t i = 0; i < X; ++i )        // probe passed: add X more
        actualDequeToFill.push_back( 123 );     // 123 stands in for "something"
}

This isn't anywhere close to foolproof... so please use it as a possible idea for a workaround rather than as an implementation.

With that workaround aside, my best guess would be... your compiler (or rather its standard library) is not compliant... as I've mentioned in a comment, C++ requires deque::push_back to throw bad_alloc when allocation fails. If you can, move away from that compiler (this is basic stuff to get right).

Fox
  • In the end, the compiler depends on the system; if the system says that the memory is available, then crashes when you use it, there's not much the compiler can do about it. One could argue that a conforming implementation isn't possible on such systems (and I'd agree with that), but you may not always have a choice. – James Kanze Oct 02 '14 at 16:51
  • That won't work. It isn't the fact that it's vector or deque; it's the fact that vector requests a big blob of memory, which the heap manager rejects, while deque requests a smaller amount, which it accepts. – CashCow Oct 02 '14 at 16:53
  • @CashCow which is another way of stating what I said - that it is implementation dependent when it ought not to be. Both std::vector::push_back and std::deque::push_back **should** throw bad_alloc - this is **required** by the C++ standard... but the OP's deque doesn't. Maybe that's because of your chunks theory, maybe not. Either way his compiler (and/or platform) is to blame. So how do we work around this? – Fox Oct 03 '14 at 07:01
  • And resizing using chunks isn't a requirement; it is just a popular implementation method - see http://stackoverflow.com/questions/5410035/when-does-a-stdvector-reallocate-its-memory-array – Fox Oct 03 '14 at 07:09
  • @JamesKanze - I agree - it is difficult for a compiler writer to write a fully compliant model for such systems. The only point I make is... somehow the compiler takes different approaches for the vector and the deque... and we can (try to) leverage that for a workaround. – Fox Oct 03 '14 at 07:19
  • @Fox The library implementation _must_ take different approaches for vector and deque:-). Ideally, the response to systems which don't allow compliance is to not use them, but not everyone has that choice. (At least with Linux, if you have root privileges, you can configure it so that it allows compliance. At least for this issue.) – James Kanze Oct 03 '14 at 09:37