When using PySpark, I'd like a SparkContext to be initialised (in yarn-client mode) upon creation of a new notebook.
The following tutorials describe how to do this for older versions of IPython/Jupyter (< 4):
https://www.dataquest.io/blog/pyspark-installation-guide/
https://npatta01.github.io/2015/07/22/setting_up_pyspark/
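As I understand it, both guides boil down to creating an IPython profile with a startup script along these lines (simplified from memory; the paths and py4j version are just examples and depend on the Spark installation):

    # ~/.ipython/profile_pyspark/startup/00-pyspark-setup.py (old profile-based approach)
    import os
    import sys

    # Example locations only - adjust to wherever Spark lives on the cluster
    spark_home = os.environ.get('SPARK_HOME', '/usr/lib/spark')
    sys.path.insert(0, os.path.join(spark_home, 'python'))
    sys.path.insert(0, os.path.join(spark_home, 'python/lib/py4j-0.9-src.zip'))

    # shell.py sets up the SparkContext as `sc`, just like running the pyspark shell
    exec(open(os.path.join(spark_home, 'python/pyspark/shell.py')).read())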
I'm not quite sure how to achieve the same with notebook > 4, since Jupyter no longer has profiles, as noted in http://jupyter.readthedocs.io/en/latest/migrating.html#since-jupyter-does-not-have-profiles-how-do-i-customize-it
I can manually create and configure a SparkContext, but I don't want our analysts to have to worry about this.
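For reference, the manual setup I'd like to hide from the analysts is essentially just this in the first cell of every notebook (the app name and executor memory are only example values):

    # What each notebook currently has to run by hand
    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setMaster('yarn-client')              # yarn client mode
            .setAppName('analyst-notebook')        # example app name
            .set('spark.executor.memory', '2g'))   # example setting
    sc = SparkContext(conf=conf)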
Does anyone have any ideas?