
I'd like to run some code in a Jupyter notebook, but the kernel keeps crashing. That's not uncommon, and usually it's one of the following:

  • a resource problem, for example an out-of-memory error. But the same code runs just fine in the terminal.
  • an issue with dependencies and versioning. To rule this out, I set up a clean new conda environment and installed only the packages I need plus jupyterlab. Again, the code works in the terminal in this new environment, but the Jupyter kernel always crashes.
  • something deeper that makes this code incompatible with a Jupyter notebook. Asynchronous code with event loops, for example, can be tricky. This is harder to test, but as a simple check I uploaded the notebook to Google Colab, where it runs without any problems.

So here's my question: how can I debug the kernel crash? Can I enable more verbose logging? I've looked at the terminal from which I'm starting the Jupyter notebook (the problem is the same for notebook and lab, by the way). There's a warning message, but I think it's a consequence of the dead kernel, not the reason for the kernel dying.

[I 2022-10-21 14:51:48.529 ServerApp] Kernel started: e349c5e3-4093-4819-a875-e3db160ab9a2
[IPKernelApp] WARNING | WARNING: attempted to send message from fork
{'header': {'msg_id': '15084d90-ffcf60cac3fd86458a3a278b_24082_66', 'msg_type': 'input_request', 'username': 'lhk', 'session': '15084d90-ffcf60cac3fd86458a3a278b', 'date': datetime.datetime(2022, 10, 21, 12, 52, 2, 178626, tzinfo=datetime.timezone.utc), 'version': '5.3'}, 'msg_id': '15084d90-ffcf60cac3fd86458a3a278b_24082_66', 'msg_type': 'input_request', 'parent_header': {'date': datetime.datetime(2022, 10, 21, 12, 51, 51, 700000, tzinfo=tzutc()), 'msg_id': 'abf35780-210e-4864-b9a0-739e6daf3dcb', 'msg_type': 'execute_request', 'session': '6a491242-4ae1-4388-b493-b4f0decd0dd4', 'username': '', 'version': '5.2'}, 'content': {'prompt': '', 'password': False}, 'metadata': {}}
[I 2022-10-21 14:52:03.529 ServerApp] AsyncIOLoopKernelRestarter: restarting kernel (1/5), new random ports

For reference, I'm trying to execute the following jupyter notebook: https://colab.research.google.com/drive/11QKlbrvOrxg4lJADAKUPqEyvoWTtiVl7?usp=sharing

It should be equivalent to executing this code (2nd example given in a comment at the very top): https://huggingface.co/spaces/codeparrot/apps_metric/blob/main/example_script.py

I've searched for similar questions. There are quite a few that seem related. But usually the answer is to check the output from the jupyter process (which isn't enough in my case) or to run the script on its own, outside of jupyter (which doesn't produce the error).

lhk
  • As far as I know, there is no extensive debugging on the terminal. After all, Jupyter is itself a separate process. You can try something like DTrace if you are serious about this, but my suggestion is: as the project gets bigger and bigger, move out of the notebook into its own script. Sorry it's not a complete answer, but do check `dtrace` (or `strace` if you are on Linux). – spramuditha Oct 21 '22 at 13:29

1 Answer


A possible reason for a Jupyter kernel dying/crashing/restarting is a bug in a library. However, even with jupyter notebook --debug you won't get much information; you have to run the code in plain Python to get more informative output.
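Before giving up on logging entirely, you can turn up verbosity on both the server and the kernel. A sketch, assuming Jupyter Server and ipykernel (the config file location is the standard one, not anything specific to this problem):

```python
# ~/.jupyter/jupyter_server_config.py
# (create it with `jupyter server --generate-config` if it doesn't exist)
c.ServerApp.log_level = 'DEBUG'
```

The kernel logs separately from the server; to make the kernel itself verbose, add `--debug` to the `argv` list in its `kernel.json` kernelspec. Be aware, though, that a hard crash such as a segfault in a C extension can kill the process before anything is logged, which is why running the code outside Jupyter tends to be more informative.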

First, save your notebook as a pure Python script (File -> Download as -> Python (.py), or `jupyter nbconvert --to script your_notebook.ipynb`) and run it directly:

python your_program.py

In my case I got the following output indicating a segmentation fault in some library.

Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)
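A lightweight complement to this, using only the standard library, is Python's faulthandler module: it prints a Python-level traceback when the process receives a fatal signal such as SIGSEGV, which often points straight at the offending library call. This works both in a plain script and inside a notebook cell:

```python
import faulthandler

# Install handlers for SIGSEGV, SIGFPE, SIGABRT, SIGBUS and SIGILL so a
# hard crash dumps the Python traceback of every thread to stderr
# before the process dies.
faulthandler.enable()

print(faulthandler.is_enabled())  # → True

# ... run the crashing code here ...
```

The same can be enabled without touching the code by running `python -X faulthandler your_program.py`.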

In this case you can use gdb to track the problem down further, as described in this answer. Start Python under gdb:

gdb --args python

Then run your program inside gdb, and print a backtrace once it crashes:

run your_program.py
bt
Nik