
When running the following in a Python 3.5 Jupyter environment I get the error below. Any ideas on what is causing it?

import findspark
findspark.init()

Error:

IndexError                                Traceback (most recent call last)
<ipython-input-20-2ad2c7679ebc> in <module>()
      1 import findspark
----> 2 findspark.init()
      3 
      4 import pyspark

/.../anaconda/envs/pyspark/lib/python3.5/site-packages/findspark.py in init(spark_home, python_path, edit_rc, edit_profile)
    132     # add pyspark to sys.path
    133     spark_python = os.path.join(spark_home, 'python')
--> 134     py4j = glob(os.path.join(spark_python, 'lib', 'py4j-*.zip'))[0]
    135     sys.path[:0] = [spark_python, py4j]
    136 

IndexError: list index out of range
Shaido
tjb305

4 Answers


This is most likely due to the SPARK_HOME environment variable not being set correctly on your system. Alternatively, you can just specify it when you're initialising findspark, like so:

import findspark
findspark.init('/path/to/spark/home')

After that, it should all work!
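
Alternatively, if you prefer to set the environment variable from inside the notebook itself, something along these lines should also work (the path below is just a placeholder for your own Spark home):

import os
import findspark

# Placeholder path; point this at your actual Spark installation directory
os.environ["SPARK_HOME"] = "/path/to/spark/home"

# With SPARK_HOME set, init() can be called without arguments
findspark.init()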

gregoltsov

I was getting the same error and was able to make it work by entering the exact installation directory:

import findspark
# Use a raw string so the Windows backslashes are not treated as escape sequences
findspark.init(r"C:\Users\PolestarEmployee\spark-1.6.3-bin-hadoop2.6")
# Test
from pyspark import SparkContext, SparkConf

Basically, it is the directory where Spark was extracted. In future, wherever you see spark_home, enter the same installation directory. I also tried using toree to create a kernel instead, but it is failing somehow. A kernel would be a cleaner solution.
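
If you are unsure whether the directory you are passing is the right one, a quick sanity check (the path below is only a placeholder for your own installation) is to run the same glob that findspark uses in the traceback above; an empty list means init() will raise this IndexError:

import os
from glob import glob

# Placeholder path; replace with the directory Spark was extracted into
spark_home = r"C:\spark-1.6.3-bin-hadoop2.6"

# findspark looks for the py4j zip under <spark_home>\python\lib
print(glob(os.path.join(spark_home, "python", "lib", "py4j-*.zip")))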

ug2409
  • For me, I had to use "/" instead of "\" to make it work, i.e. findspark.init("C:/Users/....."). Not sure why tho... – Molly Zhou Aug 23 '21 at 07:34

You need to update the SPARK_HOME variable in your bash_profile. For me, the following command worked (in the terminal):

export SPARK_HOME="/usr/local/Cellar/apache-spark/2.2.0/libexec/"

After this, you can use these commands:

import findspark
findspark.init('/usr/local/Cellar/apache-spark/2.2.0/libexec')
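
To confirm that the variable is actually visible to the notebook (for example, if Jupyter was started before you edited bash_profile), you can check it from Python first; this is just a quick sanity check:

import os

# Prints None if the notebook server was not started from a shell that
# sourced the updated bash_profile; in that case pass the path to init()
print(os.environ.get("SPARK_HOME"))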
Anurag Sharma
  • From the other solutions, this solution appears to be redundant. You shouldn't have to specify the path in both places. – D. Ror. Sep 20 '19 at 20:35
  • I would have thought that, too. However, in my environment, I still have to specify the path when calling findspark.init(). – val_to_many Jan 07 '20 at 10:53

Maybe this could help:

I found that findspark.init() was trying to find files in .\spark-3.0.1-bin-hadoop2.7\bin\python\lib, but the python folder was outside the bin folder. I simply ran findspark.init('.\spark-3.0.1-bin-hadoop2.7'), without the '\bin' folder, as in the sketch below.
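
In other words, pass the extracted Spark directory itself rather than its bin subfolder; on Windows, a raw string also keeps the backslashes from being treated as escapes (a minimal sketch of the call that worked here):

import findspark

# Point init() at the directory that contains python\lib, not at bin\
findspark.init(r".\spark-3.0.1-bin-hadoop2.7")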

nir