I have a large dynamically allocated array (C++, MSVC110), and I am initializing it like this:
try {
    size_t arrayLength = 1 << 28;   // 2^28 ints = 1 GiB
    data = new int[arrayLength];    // data is an int* declared elsewhere
    for (size_t i = 0; i < arrayLength; ++i) {
        data[i] = rand();
    }
}
catch (std::bad_alloc&) { /* Report error. */ }
Everything was fine until I tried to allocate more than the system's actual RAM, e.g. around 10 GB. I was expecting to catch a std::bad_alloc
exception, but instead the system (Win7) started swapping like crazy... you know what I am talking about.
Then I watched the process in Task Manager and noticed something interesting: in the debug build the memory usage grew instantly, but in the release build it grew gradually.
Debug mode: (Task Manager screenshot: memory usage jumps immediately)
Release mode: (Task Manager screenshot: memory usage grows gradually)
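For anyone who wants to reproduce this without the screenshots, here is a rough sketch of what I am observing (assuming Windows; GetProcessMemoryInfo needs psapi.lib, and workingSetMB is a helper name I made up):

#include <windows.h>
#include <psapi.h>   // link with psapi.lib
#include <cstdio>
#include <cstring>

// Current working set of this process, in MB.
static unsigned long long workingSetMB() {
    PROCESS_MEMORY_COUNTERS pmc = {0};
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    return pmc.WorkingSetSize >> 20;
}

int main() {
    const size_t n = size_t(1) << 28;          // 2^28 ints = 1 GiB
    std::printf("before new[]: %llu MB\n", workingSetMB());
    int* data = new int[n];                    // memory committed, pages not yet touched
    std::printf("after new[]:  %llu MB\n", workingSetMB());
    std::memset(data, 0, n * sizeof(int));     // touching the pages faults them in
    std::printf("after touch:  %llu MB\n", workingSetMB());
    delete[] data;
    return 0;
}

In my debug build the working set jumps right at new[], while in release it only grows while the pages are being touched.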
What is causing this? Could it have any negative impact on performance? Have I done something wrong? Is the OS responsible, or the C++ allocator?
I would actually prefer to get an exception when there is not enough physical memory rather than end up in an endless swapping loop. Is there any way to achieve that in C++?
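The only workaround I can think of is to check the available physical memory up front and throw the exception myself. A minimal sketch (allocateOrThrow is a made-up helper name, and ullAvailPhys is just a snapshot, so this is advisory rather than a hard guarantee):

#include <windows.h>
#include <new>

// Made-up helper: refuse up front if the request exceeds the physical
// memory currently available, instead of letting the OS start swapping.
int* allocateOrThrow(size_t count) {
    MEMORYSTATUSEX status = {0};
    status.dwLength = sizeof(status);
    if (!GlobalMemoryStatusEx(&status) ||
        (unsigned long long)count * sizeof(int) > status.ullAvailPhys) {
        throw std::bad_alloc();
    }
    return new int[count];   // can still throw std::bad_alloc on its own
}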
I know one solution might be to turn off swapping in Windows, but that would solve the problem only on my machine.
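Another per-process idea I have seen mentioned is capping the committed memory with a Win32 job object, so that allocations beyond the cap simply fail and operator new throws. Again only a sketch (the cap value is an arbitrary example, and assigning the job can fail on pre-Win8 systems if the process is already inside another job):

#include <windows.h>

// Sketch: cap this process's committed memory so that allocations
// beyond the limit fail and operator new throws std::bad_alloc.
bool capProcessMemory(SIZE_T limitBytes) {
    HANDLE job = CreateJobObject(NULL, NULL);
    if (job == NULL) return false;
    JOBOBJECT_EXTENDED_LIMIT_INFORMATION info = {0};
    info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
    info.ProcessMemoryLimit = limitBytes;   // e.g. close to physical RAM
    return SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                                   &info, sizeof(info)) != 0
        && AssignProcessToJobObject(job, GetCurrentProcess()) != 0;
}

// Usage idea: capProcessMemory(SIZE_T(1) << 30); // arbitrary 1 GiB cap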