
I have been running a particular Python script for some time. The entire script (including in Jupyter) had been running perfectly fine for many months. Now, somehow, Jupyter on my system has started showing the following error message at one particular line of the code (the last line of the code shown below). All parts of the code run fine except that last line, where I call a user-defined function to do pair counts. The user-defined function (correlation.polepy) can be found at https://github.com/OMGitsHongyu/N-body-analysis

This is the error message that I am getting:

Kernel Restarting
The kernel appears to have died. It will restart automatically.

And, here is the skeleton of my Python Code:

from __future__ import division
import numpy as np
import correlation                  # from OMGitsHongyu/N-body-analysis
from scipy.spatial import cKDTree

# Load the two catalogues (column 0 is the mass; columns 1-7 are used below).
File1 = np.loadtxt('/Users/Research/fname1.txt')
File2 = np.loadtxt('/Users/Research/fname2.txt')

# Keep only objects above the mass cut of 1.1e13.
masscut = 1.1*np.power(10,13)
mark1 = (np.where(File1[:,0]>masscut))[0]
mark2 = (np.where(File2[:,0]>masscut))[0]

Data1 = File1[mark1,1:8]
Data2 = File2[mark2,1:8]

# This is the line where the kernel dies / the segfault occurs.
Xi_masscut = correlation.polepy(p1=Data1, p2=Data2, rlim=150, nbins=150, nhocells=100, blen=1024, dis_f=100)

A similar problem happens (at the last line of the code) when I try to use IPython. When I run the script with plain Python in the terminal, I get an error message (at the last line) that says "Segmentation fault: 11". I am using Python 2.7.13 :: Anaconda 2.5.0 (x86_64).
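One way to get more than a silent crash or a bare "Segmentation fault: 11" is to enable faulthandler before the failing call; it is built into Python 3.3+ and available on Python 2.7 as a third-party package (pip install faulthandler). A minimal sketch:

import faulthandler
faulthandler.enable()  # on a segfault, print the Python-level traceback to stderr

# ... run the script as usual; when the crash happens, the traceback shows
# which Python line (here, the correlation.polepy call) triggered it.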

I have already tried the following in search of a solution:

1.> I checked some of the previous links on Stack Overflow where this problem has been asked: The kernel appears to have died. It will restart automatically

I tried the solution given in the link above; sadly, it doesn't seem to work for my case. This is the solution that was mentioned there:

conda update mkl

2.> To check whether the system was running out of memory, I closed all memory-heavy applications. My system has 16 GB of physical memory, and the problem happens even when more than 9 GB is free. (Again, this problem had not been happening before, even when other tasks were using 14 GB and I had less than 2 GB of free memory. It's very surprising that I could run the task with the given inputs before and cannot replicate the calculation with the exact same inputs now. A quick programmatic way to confirm available memory is sketched after this list.)

3.> I saw another link: https://alpine.atlassian.net/wiki/plugins/servlet/mobile?contentId=134545485#content/view/134545485

This one appears to tackle similar problems, and it talks about there not being enough memory for the Docker container. I had doubts about how to implement the suggestions mentioned there.
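For the memory check in point 2, one way to confirm how much memory is actually available at the moment of the call is the third-party psutil package (not part of the original script); for example:

import psutil

mem = psutil.virtual_memory()
print('available: %.1f GB of %.1f GB total' % (mem.available / 1e9, mem.total / 1e9))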

How do I solve this problem?

– Commoner
  • Problem: "Jupyter: the kernel appears to have died; it will restart automatically." I had the same problem and reinstalled numpy and keras, but to no avail; it seems to be a problem with CUDA being incompatible with macOS 10.13.6 or higher. When I used the Spyder IDE, the problem disappeared. – Emerson Moreira Jun 06 '20 at 18:41
  • I had the same error message building a tensorflow.keras model. I then loaded the exact same code from a file into a Python shell (instead of a Jupyter notebook). This time I got a much more detailed error message that a certain cuDNN DLL was missing from the bin directory. The problem was fixed once I put that DLL in the right directory. – garbo999 Apr 30 '22 at 11:46
  • if you encountered this problem when using pandas, check out [this answer](https://stackoverflow.com/a/74091615/19123103). – cottontail Nov 16 '22 at 21:32

12 Answers

22

This issue happened when I imported sklearn's PCA before numpy (I'm not sure whether reversing the order would solve the problem).

I later solved the issue by reinstalling numpy and mkl:

conda install numpy
conda install -c intel mkl
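To verify which BLAS/LAPACK build numpy is actually linked against after reinstalling, numpy's built-in diagnostic can be used:

import numpy as np

np.show_config()  # prints the BLAS/LAPACK libraries (e.g. MKL) numpy was built with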

– Leon
8

I tried conda install tensorflow, which solved my problem.

– Rohith V
4

Installing the library with conda instead of pip worked for me.

– arslan
3

When this happened to me, I just uploaded my notebook to Google Colab and it started working. It seems, though, that the issue is a compute/memory bottleneck when training these big models, and services like Colab have far more resources than a local machine.

– crytting
3

For macOS versions 12.0 and above, TensorFlow GPU isn't supported. So try this piece of code; it worked for me:

import os
os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'  # allow duplicate OpenMP runtimes to load; set this before importing tensorflow/numpy
2

Reinstall your library with conda instead of pip.

– Dipesh
1

Using the command:

conda install -c anaconda keras

worked for me.

– Ethan
1
pip uninstall mpi4py

worked for me.

– William
1

In my case, the GPU was running out of memory. Try smaller models.
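If shrinking the model isn't an option, asking the framework not to pre-allocate the whole GPU can also help; a minimal sketch, assuming TensorFlow 2.x is in use:

import tensorflow as tf

# Allocate GPU memory on demand instead of reserving it all at startup.
# This must run before the first operation touches the GPU.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)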

– viktor_vangel
0

In my case, the error was caused by a version mismatch in the hdf5 library, which suggests that whenever the kernel dies unexpectedly, any library pulled in during imports can be the cause.

In such cases, it is best to first check the command prompt window that was used to launch the Jupyter notebook. It logs such errors and can be used to troubleshoot these issues.

Issue caused by: import tensorflow
Message: version mismatch of the hdf5 library
Resolution: set the environment variable HDF5_DISABLE_VERSION_CHECK = 2
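A minimal sketch of that resolution; the variable must be set before anything imports h5py, e.g. at the very top of the notebook:

import os

# '2' skips the hdf5 version check entirely; '1' would print a warning but continue.
os.environ['HDF5_DISABLE_VERSION_CHECK'] = '2'

import tensorflow  # now imports without aborting on the hdf5 version mismatch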

– AbhiGupta
0

I had this problem when I called fillna() to replace numpy.nans in a pandas Series with a category-dtype Series.

The following is a minimal example that reproduces the issue.

import pandas as pd
import numpy as np

s = pd.Series([np.nan, 1])               # Series containing np.nan
v = pd.Series([3, 4], dtype='category')  # fill values with category dtype

s.fillna(v)  # this call kills the kernel

For me, the solution was simply to cast v to a numeric dtype, because it shouldn't have been category dtype in the first place (v was constructed using categories, but that's a different story). So something like

s.fillna(v.astype(float))

solved the issue.

It should also be noted that this problem doesn't occur if s contains pd.NA instead of np.nan; however, float('nan') still produces it.

Even in the OP's case, I suspect the handling of np.nan is causing the issue. For me it turned out to be a minor problem, but the fact that the kernel just restarts silently without any stack trace is very frustrating.

N.B. I have numpy 1.21.5 and pandas 1.4.2 on Python 3.9.

– cottontail
0

For me, the problem was that I ran out of RAM.

– larsaars