
I've got multiple functions that I need to call a large number of times with essentially random arguments, and I need to log what is returned each time, and with what arguments. Usually the function either returns something or raises an error, both of which I can handle fine.

However, I've now found some arguments that cause the function to crash with a core dump, which also kills my script. What I would prefer is for the call to raise an exception instead, which could then be handled by my existing exception handling code, get recorded as normal, and let the script continue testing other inputs. Is there a way to do this?
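One workaround I'm considering (sketched below, not yet tested against my actual functions) is to isolate each call in a `multiprocessing` worker process: a core dump then only kills the child, and the parent can turn the child's exit status back into a regular exception. The names `call_safely` and `ChildCrashed` are hypothetical, and the function, its arguments, and its return value are assumed to be picklable:

```python
import multiprocessing

class ChildCrashed(Exception):
    """Raised when the worker process is killed by a signal (segfault, abort)."""

def _worker(func, args, kwargs, queue):
    # Runs in the child process, so a core dump here only kills the child.
    try:
        queue.put(("ok", func(*args, **kwargs)))
    except Exception as exc:
        queue.put(("err", exc))

def call_safely(func, *args, **kwargs):
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_worker,
                                   args=(func, args, kwargs, queue))
    proc.start()
    proc.join()
    if proc.exitcode != 0:
        # On POSIX, a negative exitcode is the signal that killed the child,
        # e.g. -6 for SIGABRT ("Aborted (core dumped)") or -11 for SIGSEGV.
        raise ChildCrashed("%s%r died with exitcode %s"
                           % (func.__name__, args, proc.exitcode))
    status, value = queue.get()
    if status == "err":
        raise value  # re-raise an ordinary exception in the parent
    return value
```

With that in place the existing logging loop would stay the same, just calling `call_safely(func, *args)` instead of `func(*args)` and catching `ChildCrashed` alongside the other exceptions. Two caveats: the result is fetched after `join()`, which can deadlock for very large return values (fetch from the queue before joining in that case), and spawning a fresh process per call adds overhead, which matters when the function is called a large number of times.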

  • What do you mean by core dump? Are you segfaulting the interpreter? Anyway, that obviously can't be recovered from within the Python process. – Antimony Aug 22 '13 at 03:33
  • The error I'm getting is `python: mod.cu:864: void NVMatrix::_aggregate(int, NVMatrix&, Agg, BinaryOp) [with Agg = NVMatrixAggs::Sum, BinaryOp = NVMatrixBinaryOps::SecondScaled, NVMatrix = NVMatrix]: Assertion 'numBlocks < 65535' failed. Aborted (core dumped)`. I'm trying to come up with a way around this with multiprocessing, but I'm not sure if it will work. – Malcolm Gooding Aug 26 '13 at 20:07
  • It is a subprocess that Python has bindings to that segfaults. See how [gberseth](https://www.cs.ubc.ca/~gberseth/blog/handling-segfaults-in-python-that-occur-in-custom-c-libraries.html) handles this by registering a _sig_handler_ or surfacing the segfault as a Python exception. – c00kiemon5ter Mar 25 '18 at 18:15
  • Possible duplicate of https://stackoverflow.com/questions/27950296/using-try-to-avoiding-a-segmentation-fault – John C Sep 03 '21 at 15:41

0 Answers