
I was looking at Apache Toree to use as the PySpark kernel for Jupyter:

https://github.com/apache/incubator-toree

However, it was using an older version of Spark (1.5.1 vs. the current 1.6.0). I tried the method described here http://arnesund.com/2015/09/21/spark-cluster-on-openstack-with-multi-user-jupyter-notebook/ by creating a kernel.json:

{
 "display_name": "PySpark",
 "language": "python",
 "argv": [
  "/usr/bin/python",
  "-m",
  "ipykernel",
  "-f",
  "{connection_file}"
 ],
 "env": {
  "SPARK_HOME": "/usr/local/Cellar/apache-spark/1.6.0/libexec",
  "PYTHONPATH": "/usr/local/Cellar/apache-spark/1.6.0/libexec/python/:/usr/local/Cellar/apache-spark/1.6.0/libexec/python/lib/py4j-0.9-src.zip",
  "PYTHONSTARTUP": "/usr/local/Cellar/apache-spark/1.6.0/libexec/python/pyspark/shell.py",
  "PYSPARK_SUBMIT_ARGS": "--master local[*] pyspark-shell"
 }
}

However, I ran into a few problems:

  1. There is no /jupyter/kernels path on my Mac, so I ended up creating ~/.jupyter/kernels/pyspark. I am not sure whether that is the correct path.

  2. Even with all the paths in place, I still don't see PySpark showing up as a kernel inside Jupyter.

What did I miss?

HP.

2 Answers


Launch the Jupyter notebook with the standard Python kernel, then run the following commands to initialize PySpark within Jupyter.

import findspark
findspark.init()  # locates the Spark installation (via SPARK_HOME) and adds pyspark to sys.path

import pyspark
sc = pyspark.SparkContext()
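
To confirm the context actually works, here is a minimal sanity check (just a sketch; the numbers are arbitrary):

print(sc.parallelize(range(100)).sum())  # trivial local job; should print 4950

If SPARK_HOME is not set (see the comment below), findspark.init() also accepts the Spark home path directly, e.g. findspark.init("/usr/lib/spark").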

FYI: I have tried most of the configurations for launching Apache Toree with a PySpark kernel in Jupyter, without success.

Neil Kumar
  • The findspark library depends on the SPARK_HOME environment variable -- I hadn't set that up yet, so I had to do (in the terminal) `export SPARK_HOME=/usr/lib/spark` – user941238 Apr 14 '16 at 22:27
  • 1
    Important note - if you're trying to connect to a cluster (not run locally), make sure to include the following arguments: sc = pyspark.SparkContext(appName=app_name, master="spark://IP:PORT"). It took me a lot of googling to figure out why there weren't any applications showing up on my cluster dashboard – flyingmeatball Dec 05 '16 at 19:57
  • 1
    if your jupyter kernel is configured correctly for pyspark, the spark context will be defined for you. But if your kernel is configured correctly, you don't need `findspark` :-) – michael Jul 29 '17 at 21:52

Jupyter kernels should go in $JUPYTER_DATA_DIR. On OSX, this is ~/Library/Jupyter. See: http://jupyter.readthedocs.org/en/latest/system.html
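
To see where that directory is on your machine, you can ask Jupyter itself; a minimal check (assumes jupyter_core, which ships with Jupyter, is importable):

from jupyter_core.paths import jupyter_data_dir
import os

print(jupyter_data_dir())  # e.g. ~/Library/Jupyter on a Mac
print(os.path.join(jupyter_data_dir(), "kernels", "pyspark", "kernel.json"))  # where the spec should live

The kernelspec itself follows a kernels/<name>/kernel.json layout under that directory.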

Spencer
  • It's weird that Jupyter decided to put the folder somewhere outside of the notebook repo I built. – HP. Jan 26 '16 at 23:34
  • Dead `404` link, also, possibly out of date info. Currently, see http://jupyter.readthedocs.io/en/latest/projects/jupyter-directories.html , which suggests it could be now `$JUPYTER_PATH`, though the aforementioned "data dir" env var may still be respected. To query where that directory is on your system, run `jupyter --data-dir` – michael Jul 29 '17 at 21:51