My C++ program (running on macOS) got killed. Running it under a debugger, I obtain the following:

Process 90937 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGKILL
frame #0: 0x000000010001fc50 captureISO`tbb::interface9::internal::start_for<WDutils::Parallel::details::blocked_range_terminating<unsigned long>, (anonymous namespace)::simulations::sampleSome(bool)::$_2, tbb::auto_partitioner const>::run_body(WDutils::Parallel::details::blocked_range_terminating<unsigned long>&) [inlined] tbb::concurrent_vector<(anonymous namespace)::simulations::initialCondition, tbb::cache_aligned_allocator<(anonymous namespace)::simulations::initialCondition> >::push_back(this=0x00007fff5fbfebd0)::simulations::initialCondition const&) at concurrent_vector.h:846 [opt]
   843         iterator push_back( const_reference item )
   844         {
   845             push_back_helper prolog(*this);
-> 846             new(prolog.internal_push_back_result()) T(item);
   847             return prolog.return_iterator_and_dismiss();
   848         }
   849

So it appears that the kill signal was issued during an allocation inside tbb::concurrent_vector<>. However, neither the documentation for tbb::concurrent_vector::push_back() nor that for operator new suggests any such behaviour.

Is this undocumented behaviour of TBB (does it prefer to kill the running process because it cannot deal appropriately with exceptions)? How can I find out, and how can I avoid it?

Peter Mortensen
Walter

1 Answer

You probably need to optimize your algorithm to use less memory.

I don't know exactly how macOS handles memory allocation, but on Linux, if you use too much memory, the "OOM killer" will send the SIGKILL signal.

  • @Walter Hopefully you don't have to rethink the memory allocation entirely and instead there's an easier way to catch the failing new or handle / free memory before the OS kills the app for what it detects as a runaway allocation pattern (if that's what's actually happening) - https://stackoverflow.com/a/6834029/475228 – bmike Jul 20 '19 at 16:52
  • I do understand now that memory allocation was a problem. However, shouldn't this result in operator `new` throwing an exception rather than the run-time system killing the process? – Walter Jul 20 '19 at 19:36
  • @Walter Not necessarily; that's what "overcommit" is about. Basically, on most Linux systems you can allocate 4 TB of memory without any problem, because the kernel does not immediately provide physical storage to back that virtual memory. It basically "hopes" that you are not going to access most of the memory you allocated. The behaviour on macOS does not seem to be documented, but [this](https://serverfault.com/questions/852059/does-darwin-macos-kernel-do-memory-overcommit) post suggests that macOS does indeed overcommit. – sbabbi Jul 20 '19 at 21:30
  • That'll be it. Integrate that crucial detail into the answer for an upvote (it currently doesn't fully answer the question) – Lightness Races in Orbit Jul 20 '19 at 21:40