
I would like to run PySpark from a Jupyter notebook. I downloaded and installed Anaconda, which includes Jupyter, and ran the following lines:

 from pyspark import SparkConf, SparkContext
 conf = SparkConf().setMaster("local").setAppName("My App")
 sc = SparkContext(conf = conf)

I get the following error

ImportError                               Traceback (most recent call last)
<ipython-input-3-98c83f0bd5ff> in <module>()
----> 1 from pyspark import SparkConf, SparkContext
      2 conf = SparkConf().setMaster("local").setAppName("My App")
      3 sc = SparkContext(conf = conf)

C:\software\spark\spark-1.6.2-bin-hadoop2.6\python\pyspark\__init__.py in <module>()
     39
     40 from pyspark.conf import SparkConf
---> 41 from pyspark.context import SparkContext
     42 from pyspark.rdd import RDD
     43 from pyspark.files import SparkFiles

C:\software\spark\spark-1.6.2-bin-hadoop2.6\python\pyspark\context.py in <module>()
     26 from tempfile import NamedTemporaryFile
     27
---> 28 from pyspark import accumulators
     29 from pyspark.accumulators import Accumulator
     30 from pyspark.broadcast import Broadcast

ImportError: cannot import name accumulators

Based on an answer to the Stack Overflow question "importing pyspark in python shell", I tried adding a PYTHONPATH environment variable that points to the spark/python directory, but it was of no help.
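In Python terms, that attempt amounts to the following (a sketch; the path is the one from my traceback):

import sys

# Equivalent of pointing PYTHONPATH at spark/python
sys.path.append(r"C:\software\spark\spark-1.6.2-bin-hadoop2.6\python")

from pyspark import SparkConf, SparkContext  # still fails with the same ImportError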

– Tinniam V. Ganesh

3 Answers


This worked for me:

import os
import sys

# Adjust to wherever Spark is unpacked on your machine;
# a raw string avoids backslash-escape surprises on Windows
spark_path = r"D:\spark"

os.environ['SPARK_HOME'] = spark_path
os.environ['HADOOP_HOME'] = spark_path

sys.path.append(spark_path + "/bin")
sys.path.append(spark_path + "/python")
sys.path.append(spark_path + "/python/pyspark/")
sys.path.append(spark_path + "/python/lib")
sys.path.append(spark_path + "/python/lib/pyspark.zip")
# The bundled Py4J zip is what fixes "cannot import name accumulators";
# the file name varies by Spark version (py4j-0.9 ships with Spark 1.6)
sys.path.append(spark_path + "/python/lib/py4j-0.9-src.zip")

from pyspark import SparkConf, SparkContext

sc = SparkContext("local", "test")

To verify:

In [2]: sc
Out[2]: <pyspark.context.SparkContext at 0x707ccf8>
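As a further check, run a small job through the new context (a minimal smoke test; any small RDD computation would do):

# Sum the integers 0..99 on the local context; the expected result is 4950
sc.parallelize(range(100)).sum()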
– James Ma
  • Nope. I get the same error: ImportError: cannot import name accumulators, raised from pyspark\context.py at the line from pyspark import accumulators, exactly as in the traceback above. – Tinniam V. Ganesh Jul 16 '16 at 07:41

2018 version

INSTALL PYSPARK ON WINDOWS 10 WITH JUPYTER NOTEBOOK AND ANACONDA NAVIGATOR

STEP 1

Download Packages

1) spark-2.2.0-bin-hadoop2.7.tgz

2) Java JDK 8

3) Anaconda v5.2

4) scala-2.12.6.msi

5) Hadoop v2.7.1

STEP 2

MAKE A spark FOLDER IN THE C:\ DRIVE AND PUT EVERYTHING INSIDE IT. It will look like the layout sketched below.

NOTE: DURING INSTALLATION OF SCALA, SET THE INSTALL PATH TO THE scala FOLDER INSIDE THE spark FOLDER
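Inferred from the variable values in STEP 3 below, the layout should presumably be:

C:\spark\
    hadoop\   (Hadoop v2.7.1, unpacked)
    scala\    (scala 2.12.6 install directory)
    spark\    (spark-2.2.0-bin-hadoop2.7, unpacked and presumably renamed to spark)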

STEP 3

NOW SET NEW WINDOWS ENVIRONMENT VARIABLES

  1. HADOOP_HOME=C:\spark\hadoop

  2. JAVA_HOME=C:\Program Files\Java\jdk1.8.0_151

  3. SCALA_HOME=C:\spark\scala\bin

  4. SPARK_HOME=C:\spark\spark

  5. PYSPARK_PYTHON=C:\Users\user\Anaconda3\python.exe

  6. PYSPARK_DRIVER_PYTHON=C:\Users\user\Anaconda3\Scripts\jupyter.exe

  7. PYSPARK_DRIVER_PYTHON_OPTS=notebook

  8. NOW ADD SPARK'S bin FOLDER TO THE WINDOWS Path VARIABLE:

    Click on Edit and add New

    Add "C:\spark\spark\bin" to the "Path" variable

STEP 4

  • Make a folder where you want to store Jupyter notebook outputs and files
  • After that, open the Anaconda command prompt and cd into that folder
  • then enter pyspark (example below)
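For example, with a hypothetical C:\notebooks folder:

cd C:\notebooks
pyspark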

That's it: your browser will pop up with Jupyter on localhost.

STEP 5

Check whether pyspark is working:

Type some simple code and run it

from pyspark.sql import Row
a = Row(name='Vinay', age=22, height=165)
print("a: ", a)
– Vinay Chaudhari

Running PySpark in Jupyter notebooks on Windows

Java 8: https://www.guru99.com/install-java.html

Anaconda: https://www.anaconda.com/distribution/

PySpark in Jupyter: https://changhsinlee.com/install-pyspark-windows-jupyter/

(findspark, used below, can be installed with pip install findspark.)

import findspark

# Locates the local Spark installation and adds it to sys.path
findspark.init()

from pyspark.sql import SparkSession
from pyspark.sql.functions import *
from pyspark.sql.types import *

spark = SparkSession.builder.appName('test').getOrCreate()

data = [(1, "siva", 100), (2, "siva2", 200), (3, "siva3", 300),
        (4, "siva4", 400), (5, "siva5", 500)]
schema = ['id', 'name', 'salary']

df = spark.createDataFrame(data, schema=schema)
df.show()
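If the session comes up correctly, df.show() should print something like:

+---+-----+------+
| id| name|salary|
+---+-----+------+
|  1| siva|   100|
|  2|siva2|   200|
|  3|siva3|   300|
|  4|siva4|   400|
|  5|siva5|   500|
+---+-----+------+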