
If I restart my Jupyter kernel, will any existing LocalCluster shut down, or will the dask worker processes keep running?

I know that when I used a SLURMCluster, the processes kept running if I restarted my kernel without calling cluster.close(), and I had to use squeue to see them and scancel to cancel them.

For local processes, however, how can I tell that all the worker processes are gone after I have restarted my kernel? And if they do not disappear automatically, how can I manually shut them down when I no longer have access to the cluster object (since the kernel restarted)?

I try to remember to call cluster.close(), but I often forget. Using a context manager doesn't work for my Jupyter needs.
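For reference, this is the context-manager pattern I mean (a rough sketch; the worker count is arbitrary). It cleans everything up when the block exits, but I need the cluster to stay alive across notebook cells:

```python
# Minimal sketch of the context-manager approach (n_workers is arbitrary):
# both LocalCluster and Client close themselves when the block exits.
from dask.distributed import Client, LocalCluster

with LocalCluster(n_workers=2) as cluster, Client(cluster) as client:
    result = client.submit(sum, [1, 2, 3]).result()
    print(result)
# at this point the workers and scheduler have been shut down
```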

HashBr0wn
  • the answer could be affected by the tasks you've scheduled (e.g. tasks called with [`fire_and_forget`](https://docs.dask.org/en/stable/futures.html#fire-and-forget) will keep running) and maybe your settings (not sure), but generally, yes it will shut down when the cluster falls out of context. – Michael Delgado May 10 '22 at 01:52
  • I am not using `fire_and_forget`. Do you know of a way to check for running dask workers in a terminal? Or with a python command, even if you don't have a cluster? – HashBr0wn May 10 '22 at 13:20

1 Answer


During normal termination of your kernel's Python process, all objects will be finalised. For the cluster object, this includes calling close() automatically, so you don't normally need to worry about it.

It is possible that close() does not get a chance to run if the kernel is killed forcibly rather than terminated normally. Since all LocalCluster processes are children of the kernel that started them, this will still result in the cluster stopping, though perhaps with some warnings about connections that did not have time to clean themselves up. You should be able to ignore such warnings.
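If you want to double-check after a restart (or from a terminal, as asked in the comments), you can inspect the process table, e.g. with `ps aux | grep dask`, or do the equivalent in Python with psutil. A rough sketch, assuming psutil is installed; note that LocalCluster spawns its workers via multiprocessing, so their command lines may not contain "dask" at all, and an empty result is a hint rather than proof:

```python
# Heuristic sketch (assumes psutil is installed): list processes whose
# command line mentions dask or distributed. Workers spawned by LocalCluster
# go through multiprocessing, so they may not match -- treat this as a hint,
# not proof that nothing is left running.
import psutil

for proc in psutil.process_iter(["pid", "cmdline"]):
    cmdline = " ".join(proc.info["cmdline"] or [])
    if "dask" in cmdline or "distributed" in cmdline:
        print(proc.info["pid"], cmdline[:120])
```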

mdurant