
I have a dataset in my S3 bucket "sqlnew"; "test" is the directory and "rtest" is the file. When I try to execute the PySpark code below, it throws an error.

import os
import sys
os.environ['SPARK_HOME'] = "/home/hadoop/spark/spark-2.2.0-bin-hadoop2.7"
sys.path.append("/home/hadoop/spark/spark-2.2.0-bin-hadoop2.7/python/")
sys.path.append("/home/hadoop/spark/spark-2.2.0-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip")
from pyspark.sql import SQLContext,SparkSession
spark = SparkSession.builder\
    .appName("test")\
    .getOrCreate()
sc = spark.sparkContext
sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", "AKIttttttJQxxxx")
sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", "vfttt+A9yqtt+114AttttttttvKejCevccc")
myRDD = sc.textFile("s3n://sqlnew/test/rtest").count()

After executing count, it throws the error below.

File "<ipython-input-39-0c6df03c6adc>", line 11, in <module>
    myRDD = sc.textFile('s3n://sqlnew/test/rtest').count()

  File "/home/hadoop/spark/spark-2.2.0-bin-hadoop2.7/python/pyspark/rdd.py", line 1041, in count
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()

  File "/home/hadoop/spark/spark-2.2.0-bin-hadoop2.7/python/pyspark/rdd.py", line 1032, in sum
    return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)

  File "/home/hadoop/spark/spark-2.2.0-bin-hadoop2.7/python/pyspark/rdd.py", line 906, in fold
    vals = self.mapPartitions(func).collect()

  File "/home/hadoop/spark/spark-2.2.0-bin-hadoop2.7/python/pyspark/rdd.py", line 809, in collect
    port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())

  File "/home/hadoop/spark/spark-2.2.0-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
    answer, self.gateway_client, self.target_id, self.name)

  File "/home/hadoop/spark/spark-2.2.0-bin-hadoop2.7/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)

  File "/home/hadoop/spark/spark-2.2.0-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
    format(target_id, ".", name), value)

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: java.io.IOException: No FileSystem for scheme: s3n
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:258)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:194)

I am able to access the same file using the boto3 module; please find the code below.

import boto3
import json

s3 = boto3.resource('s3',use_ssl=False,
                     aws_access_key_id="AKIttttttJQxxxx",
                     aws_secret_access_key="vfttt+A9yqtt+114AttttttttvKejCevccc")
content_object = s3.Object('sqlnew', 'test/rtest')
file_content = content_object.get()['Body'].read().decode('utf-8')
print(file_content)
Output:
975078|56691|2.000|20171001_926_570_1322
975078|42993|1.690|20171001_926_570_1322
975078|46462|2.000|20171001_926_570_1322
975078|87815|1.000|20171001_926_570_1322  

Please help me resolve the above PySpark issue.

Thanks in advance.

Sai

1 Answer


To resolve this issue, copy the hadoop-aws-2.7.3.jar file into the spark-2.2.0-bin-hadoop2.7/jars directory, because this jar is not part of the default Spark distribution. It can be downloaded from the Maven repository:

https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-aws/2.7.3
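
As an alternative to copying the jar by hand, Spark can resolve it from Maven at session startup via the spark.jars.packages property. The sketch below is only an illustration: it assumes the driver host can reach Maven Central, and it reuses the bucket, path, and keys from the question. The property must be set before the SparkSession is created; it has no effect on a session that is already running.

from pyspark.sql import SparkSession

# Hedged sketch: let Spark resolve hadoop-aws (and its AWS SDK dependency) from Maven
# at startup instead of copying the jar into spark-2.2.0-bin-hadoop2.7/jars manually.
spark = SparkSession.builder \
    .appName("test") \
    .config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:2.7.3") \
    .getOrCreate()

sc = spark.sparkContext
sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", "AKIttttttJQxxxx")
sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", "vfttt+A9yqtt+114AttttttttvKejCevccc")

print(sc.textFile("s3n://sqlnew/test/rtest").count())

If a session was already created earlier in the notebook, stop it (or restart the interpreter) before running this, so the package resolution actually takes effect.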
Sai