
What does the pyspark script actually do besides importing packages properly? Is it possible to use a regular Jupyter notebook and then import what is needed?

Cedric H.

3 Answers

3

Yes, it is possible, but it can be painful. The Python side alone is not an issue: all you need is to set $SPARK_HOME and add $SPARK_HOME/python (and, if it is not accessible otherwise, $SPARK_HOME/python/lib/py4j-[VERSION]-src.zip) to your PYTHONPATH. However, the pyspark script also handles JVM setup for you (--packages, --jars, --conf, etc.).

This can be handled using the PYSPARK_SUBMIT_ARGS variable or using $SPARK_HOME/conf (see for example How to load jar dependencies in IPython Notebook).

There is an old blog post from Cloudera which describes an example configuration and, as far as I remember, it still works.
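
For illustration, here is a minimal sketch of that bootstrap in a plain Jupyter notebook, assuming a Spark 1.6.0 install under /usr/local/lib (the paths, the py4j version, and the spark-csv package are only examples; adjust them to your setup):

import os
import sys

# Hypothetical install location; point this at your own Spark distribution.
spark_home = "/usr/local/lib/spark-1.6.0-bin-hadoop2.6"
os.environ["SPARK_HOME"] = spark_home

# Make the pyspark package and the bundled py4j importable.
sys.path.insert(0, os.path.join(spark_home, "python"))
sys.path.insert(0, os.path.join(spark_home, "python", "lib", "py4j-0.9-src.zip"))

# JVM-side options that the pyspark script would normally pass for you.
os.environ["PYSPARK_SUBMIT_ARGS"] = "--packages com.databricks:spark-csv_2.10:1.3.0 pyspark-shell"

from pyspark import SparkContext
sc = SparkContext(appName="plain-jupyter")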

zero323
  • How about if I want to send my own `py` library that has several classes, etc.? – Alberto Bonsanto Feb 28 '16 at 18:14
  • @AlbertoBonsanto As usual. Either install it on the workers or ship it using pyFiles if possible (see the sketch after these comments). Do you experience any problems with that? – zero323 Feb 28 '16 at 18:21
  • The documentation is pretty ambiguous, and I was trying to do that without any positive result in `Python` or in `Scala`; I will have to try it once again. – Alberto Bonsanto Feb 28 '16 at 18:22
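
For reference, a minimal sketch of the pyFiles approach mentioned in the comments, assuming your library is packaged as a zip archive (my_lib.zip and the my_lib package are hypothetical names):

from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("custom-lib-example")
# Ship a zipped copy of your own package to the executors.
sc = SparkContext(conf=conf, pyFiles=["my_lib.zip"])

# Alternatively, once the context already exists:
# sc.addPyFile("my_lib.zip")

# The package can now be imported inside tasks that run on the workers.
def use_lib(x):
    import my_lib
    return (my_lib.__name__, x)

print(sc.parallelize([1, 2, 3]).map(use_lib).collect())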
2

You could consider creating a pySpark kernel for Jupyter - it would import pyspark packages for you.

Create the file (you need to create the directory first; for older Jupyter versions it might be located somewhere else):

~/.local/share/jupyter/kernels/pyspark/kernel.json

with the following content:

{
 "display_name": "pySpark (Spark 1.6.0)",
 "language": "python",
 "argv": [
  "/usr/bin/python2",
  "-m",
  "IPython.kernel",
  "-f",
  "{connection_file}"
 ],
 "env": {
  "SPARK_HOME": "/usr/local/lib/spark-1.6.0-bin-hadoop2.6",
  "PYTHONPATH": "/usr/local/lib/spark-1.6.0-bin-hadoop2.6/python/:/usr/local/lib/spark-1.6.0-bin-hadoop2.6/python/lib/py4j-0.9-src.zip",
  "PYTHONSTARTUP": "/usr/local/lib/spark-1.6.0-bin-hadoop2.6/python/pyspark/shell.py",
  "PYSPARK_SUBMIT_ARGS": "pyspark-shell"
 }
}

Change the Spark paths appropriately to match your installation.
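
Once you restart Jupyter and select this kernel, shell.py should have created a SparkContext for you; a quick sanity check (assuming the paths above match your installation):

# Run in a notebook started with the pySpark kernel; shell.py creates `sc`.
print(sc.version)                       # the Spark version, e.g. 1.6.0
print(sc.parallelize(range(10)).sum())  # a trivial job to confirm the JVM side works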

2

Assuming you haven't already created the context, what I like to do is set the submit args using PYSPARK_SUBMIT_ARGS:

import os

os.environ['PYSPARK_SUBMIT_ARGS'] = '--driver-memory 15g --packages com.databricks:spark-csv_2.10:1.3.0 pyspark-shell'
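
A context created afterwards picks those arguments up; a minimal sketch of what typically follows, assuming SPARK_HOME and PYTHONPATH already point at your Spark install:

# PYSPARK_SUBMIT_ARGS must be set before the SparkContext (and its JVM) is created.
from pyspark import SparkContext

sc = SparkContext(appName="submit-args-example")
# The spark-csv package requested above is then available, e.g. through
# sqlContext.read.format("com.databricks.spark.csv").load("some_file.csv")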
Kamil Sindi