I've written the following Python code, which sums up all the numbers in the first column of each CSV file:
import os, sys, inspect, csv
### Current directory path.
curr_dir = os.path.split(inspect.getfile(inspect.currentframe()))[0]
### Setup the environment variables
spark_home_dir = os.path.realpath(os.path.abspath(os.path.join(curr_dir, "../spark")))
python_dir = os.path.realpath(os.path.abspath(os.path.join(spark_home_dir, "./python")))
os.environ["SPARK_HOME"] = spark_home_dir
os.environ["PYTHONPATH"] = python_dir
### Setup pyspark directory path
pyspark_dir = python_dir
sys.path.append(pyspark_dir)
### Import the pyspark
from pyspark import SparkConf, SparkContext
### Specify the data file directory, and load the data files
data_path = os.path.realpath(os.path.abspath(os.path.join(curr_dir, "./test_dir")))
### myfunc adds up all the numbers in the first column of a csv file.
def myfunc(s):
    total = 0
    if s.endswith(".csv"):
        cr = csv.reader(open(s, "rb"))
        for row in cr:
            total += int(row[0])
    return total

def main():
    ### Initialize the SparkConf and SparkContext
    conf = SparkConf().setAppName("ruofan").setMaster("spark://ec2-52-26-177-197.us-west-2.compute.amazonaws.com:7077")
    sc = SparkContext(conf = conf)
    datafile = sc.wholeTextFiles(data_path)
    ### Send the application to each of the slave nodes
    temp = datafile.map(lambda (path, content): myfunc(str(path).strip('file:')))
    ### Collect the results and print them out.
    for x in temp.collect():
        print x

if __name__ == "__main__":
    main()
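For context, here is a rough sketch of how I picture the per-file sums being computed directly from the (path, content) pairs that wholeTextFiles returns, instead of re-opening each file by its path inside myfunc. The helper name sum_first_column is just for illustration and is not part of sum.py:

### Sketch only: sum the first column straight from the file content that
### wholeTextFiles already provides, so nothing has to be re-opened on the
### worker. The helper name sum_first_column is my own, not part of sum.py.
def sum_first_column(content):
    total = 0
    for line in content.splitlines():
        line = line.strip()
        if line:
            total += int(line.split(",")[0])
    return total

### Inside main() this would be used as something like:
###     sums = datafile.map(lambda (path, content): sum_first_column(content))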
I would like to use Apache Spark to parallelize this summation process across several CSV files using the same Python code. I've already done the following steps:
- I've created one master and two slave nodes on AWS.
- I've used the bash command
$ scp -r -i my-key-pair.pem my_dir root@ec2-52-27-82-124.us-west-2.compute.amazonaws.com
to upload the directory my_dir, which contains my Python code and the CSV files, to the cluster's master node.
- I've logged into the master node and from there used the bash command
$ ./spark/copy-dir my_dir
to send my Python code and the CSV files to all slave nodes.
- I've set up the environment variables on the master node:
$ export SPARK_HOME=~/spark
$ export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
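For completeness, sum.py also sets SPARK_HOME and PYTHONPATH at the top of the script, but it only points PYTHONPATH at the python/ directory. In the Spark builds I've looked at there is also a py4j-*-src.zip under $SPARK_HOME/python/lib; I haven't put that zip on the path anywhere, and I'm not sure whether it's needed. A sketch of what adding it would look like (the glob pattern is a guess on my part, since the exact zip name depends on the Spark version):

### Sketch only: also put the bundled py4j zip on sys.path, in case the plain
### python/ directory is not enough. The glob pattern is an assumption; the
### exact zip name depends on the Spark version.
import glob, os, sys
spark_home = os.environ.get("SPARK_HOME", os.path.expanduser("~/spark"))
for zip_path in glob.glob(os.path.join(spark_home, "python", "lib", "py4j-*.zip")):
    sys.path.append(zip_path)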
However, when I run the Python code on the master node with
$ python sum.py
it shows the following error:
Traceback (most recent call last):
File "sum.py", line 18, in <module>
from pyspark import SparkConf, SparkContext
File "/root/spark/python/pyspark/__init__.py", line 41, in <module>
from pyspark.context import SparkContext
File "/root/spark/python/pyspark/context.py", line 31, in <module>
from pyspark.java_gateway import launch_gateway
File "/root/spark/python/pyspark/java_gateway.py", line 31, in <module>
from py4j.java_gateway import java_import, JavaGateway, GatewayClient
ImportError: No module named py4j.java_gateway
I don't know how to resolve this error. Also, I'm wondering whether the master node automatically has all the slave nodes run the computation in parallel. I'd really appreciate it if anyone could help.