26

I am using a Jupyter notebook with PySpark, based on the following Docker image: Jupyter all-spark-notebook

Now I would like to write a PySpark streaming application which consumes messages from Kafka. In the Spark-Kafka Integration guide they describe how to deploy such an application using spark-submit (it requires linking an external jar - the explanation is in 3. Deploying). But since I am using the Jupyter notebook I never actually run the spark-submit command; I assume it gets run in the background when I press execute.

In the spark-submit command you can specify some parameters, one of them being --jars, but it is not clear to me how I can set this parameter from the notebook (or externally via environment variables?). I am assuming I can link this external jar dynamically via the SparkConf or the SparkContext object. Does anyone have experience with how to perform the linking properly from the notebook?

DDW
  • 1,975
  • 2
  • 13
  • 26

5 Answers

23

I've managed to get it working from within the jupyter notebook which is running from the all-spark container.

I start a python3 notebook in jupyterhub and overwrite the PYSPARK_SUBMIT_ARGS environment variable as shown below. The Kafka consumer library was downloaded from the Maven repository and put in my home directory /home/jovyan:

import os
os.environ['PYSPARK_SUBMIT_ARGS'] = \
    '--jars /home/jovyan/spark-streaming-kafka-assembly_2.10-1.6.1.jar pyspark-shell'

import pyspark
from pyspark.streaming.kafka import KafkaUtils
from pyspark.streaming import StreamingContext

sc = pyspark.SparkContext()
ssc = StreamingContext(sc, 1)  # batch interval of 1 second

broker = "<my_broker_ip>"  # host:port, e.g. "localhost:9092"
directKafkaStream = KafkaUtils.createDirectStream(ssc, ["test1"],
                        {"metadata.broker.list": broker})
directKafkaStream.pprint()
ssc.start()

Note: Don't forget the pyspark-shell at the end of the environment variable!

Extension: If you want to include code from spark-packages you can use the --packages flag instead. An example of how to do this in the all-spark-notebook can be found here.
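A minimal sketch of the --packages variant, assuming the Maven coordinates org.apache.spark:spark-streaming-kafka-assembly_2.10:1.6.1 correspond to the jar used above (adjust them to your Spark/Scala version):

import os
os.environ['PYSPARK_SUBMIT_ARGS'] = \
    '--packages org.apache.spark:spark-streaming-kafka-assembly_2.10:1.6.1 pyspark-shell'

With this, Spark resolves and downloads the package (and its dependencies) at startup, so no jar has to be placed in /home/jovyan manually.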

DDW
  • 1,975
  • 2
  • 13
  • 26
  • 1
    Thanks. Just want to say that `broker` should be of format like: `"localhost:9092"`. – kww Apr 16 '18 at 14:11
  • Were you ever able to do the same thing without downloading the jar and using `--packages` option (mentioned here: https://spark.apache.org/docs/latest/submitting-applications.html) instead? – Michele Piccolini Dec 11 '19 at 15:50
  • I am surprised that this actually worked for you. I have to setup PYSPARK_SUBMIT_ARGS in Dockerfile before container start. – FrankZhu Dec 17 '20 at 15:03
9

Indeed, there is a way to link it dynamically via the SparkConf object when you create the SparkSession, as explained in this answer:

from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName("My App") \
    .config("spark.jars", "/path/to/jar.jar,/path/to/another/jar.jar") \
    .getOrCreate()
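If you would rather resolve packages from Maven than point at local jar files, the corresponding config key is spark.jars.packages; a minimal sketch, where "groupId:artifactId:version" is only a placeholder for the real coordinates:

from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName("My App") \
    .config("spark.jars.packages", "groupId:artifactId:version") \
    .getOrCreate()

As with spark.jars, this only takes effect if it is set before the first SparkSession/SparkContext is created, because the jars are added when the JVM starts.
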
Nandan Rao
  • 333
  • 2
  • 10
1

You can run your jupyter notebook with the pyspark command by setting the relevant environment variables:

export PYSPARK_DRIVER_PYTHON=jupyter
export IPYTHON=1
export PYSPARK_DRIVER_PYTHON_OPTS="notebook --port=XXX --ip=YYY"

with XXX being the port you want to use to access the notebook and YYY being the IP address.

Now simply run pyspark and add --jars as a switch, the same as you would with spark-submit.
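For example, reusing the Kafka assembly jar path from the accepted answer (substitute your own jar):

pyspark --jars /home/jovyan/spark-streaming-kafka-assembly_2.10-1.6.1.jar

This starts the notebook server, and the SparkContext available in the notebook should then have the jar on its classpath.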

Assaf Mendelson
  • 12,701
  • 5
  • 47
  • 56
  • 2
    That's interesting. Docker can set environment variables with `docker run -e `, but they can also get clobbered somewhere. The Dockerfile for all-spark-notebook uses env `SPARK_OPTS` but I have noticed that all-spark-notebook Toree (scala) was clobbering a `--driver-memory` setting as well as `--master` and using `local[2]` in a particular kernel.json file. See, e.g., my post about some manual testing in https://github.com/jupyter/docker-stacks/pull/144 . – Paul Mar 13 '16 at 20:24
1

In case someone is in the same situation as me: I tried all of the above solutions and none of them worked for me. What I'm trying to do is use Delta Lake in the Jupyter notebook.

Finally I was able to use from delta.tables import * by calling SparkContext.addPyFile("/path/to/your/jar.jar") first. Although the official Spark docs only mention adding .zip or .py files, I tried a .jar and it worked perfectly.
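A minimal sketch of that approach; the jar filename and path are only examples, so point it at wherever you downloaded the Delta Lake jar to:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-test").getOrCreate()
sc = spark.sparkContext

# Example path/filename: use the Delta Lake jar you actually downloaded.
sc.addPyFile("/home/jovyan/delta-core_2.12-1.0.0.jar")

# addPyFile puts the jar on the Python path, so the Python module bundled
# inside it becomes importable.
from delta.tables import *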

Dd__Mad
  • 116
  • 8
0

For working in a Jupyter notebook with Spark you need to give the location of the external jars before the creation of the SparkContext object. Running pyspark --jars yourJar will create a SparkContext with the location of the external jars.
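Once the shell or notebook has started this way, the pyspark driver normally pre-creates sc for you; as a quick sanity check that the jar was picked up (a sketch, assuming the --jars value is exposed through the spark.jars conf entry, which is the usual behaviour):

print(sc.getConf().get("spark.jars", "no external jars registered"))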

prajwal
  • 69
  • 5