
I'm trying to provide a list of files for Spark to read as and when it needs them (which is why I'd rather not use boto or anything else to pre-download all the files onto the instance and only then read them into Spark "locally").

import os
from pyspark.sql import SparkSession

os.environ['PYSPARK_SUBMIT_ARGS'] = "--master local[3] pyspark-shell"
spark = SparkSession.builder.getOrCreate()
# `credentials` is a dict defined earlier (AccessKeyId / SecretAccessKey)
spark.sparkContext._jsc.hadoopConfiguration().set('fs.s3.access.key', credentials['AccessKeyId'])
spark.sparkContext._jsc.hadoopConfiguration().set('fs.s3.secret.key', credentials['SecretAccessKey'])
spark.read.json(['s3://url/3521.gz', 's3://url/2734.gz'])

I'm not sure why local[3] in particular (it just means run Spark locally with 3 worker threads), but without this --master flag I was getting another exception:

Exception: Java gateway process exited before sending the driver its port number.
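
For what it's worth, setting the master on the builder is equivalent to the --master flag and avoids the environment variable entirely - a minimal sketch, with the app name made up:

from pyspark.sql import SparkSession

# Equivalent to passing --master local[3]: run Spark locally with 3 worker threads.
spark = (
    SparkSession.builder
    .master("local[3]")
    .appName("s3-json-read")  # hypothetical name, anything works
    .getOrCreate()
)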

Now, I'm getting this:

Py4JJavaError: An error occurred while calling o37.json.
: org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "s3"
...

Not sure what o37.json refers to here (presumably just Py4J's id for the JVM object whose json method was being called), but it probably doesn't matter.
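
From what I understand, the error itself means that no FileSystem implementation is registered for the s3 scheme. One workaround I've seen suggested (untested here) is to map the bare s3 scheme onto the s3a connector, assuming hadoop-aws and a matching AWS SDK jar are on the classpath:

# Sketch: route s3:// URLs through the S3A filesystem implementation.
# Assumes hadoop-aws (and its matching AWS SDK jar) is on the classpath.
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
hadoop_conf.set('fs.s3.impl', 'org.apache.hadoop.fs.s3a.S3AFileSystem')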

I saw a bunch of answers to similar questions suggesting adding flags like:

os.environ['PYSPARK_SUBMIT_ARGS'] = "--packages com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.7.2 pyspark-shell"

I tried both prepending and appending it to the other flag, but neither works.
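
In case the exact combination matters, what I tried looks roughly like this - note that pyspark-shell has to stay at the end of the string:

os.environ['PYSPARK_SUBMIT_ARGS'] = (
    "--master local[3] "
    "--packages com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.7.2 "
    "pyspark-shell"
)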

Nor do the many variations I've seen in other answers and elsewhere on the internet (with different packages and versions) - for example:

os.environ['PYSPARK_SUBMIT_ARGS'] = '--master local[*] --jars spark-snowflake_2.12-2.8.4-spark_3.0.jar,postgresql-42.2.19.jar,mysql-connector-java-8.0.23.jar,hadoop-aws-3.2.2,aws-java-sdk-bundle-1.11.563.jar'
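
(As far as I can tell, --jars expects paths to jar files that already exist locally, and PYSPARK_SUBMIT_ARGS still has to end with pyspark-shell, so a corrected version of that last variant would look more like this - assuming those jars really are in the working directory:)

os.environ['PYSPARK_SUBMIT_ARGS'] = (
    '--master local[*] '
    '--jars spark-snowflake_2.12-2.8.4-spark_3.0.jar,postgresql-42.2.19.jar,'
    'mysql-connector-java-8.0.23.jar,hadoop-aws-3.2.2.jar,aws-java-sdk-bundle-1.11.563.jar '
    'pyspark-shell'
)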

1 Answer

A typical example of reading files from S3 is shown below.

Additionally, you can go through this answer to make sure the minimal structure and necessary modules are in place - java.io.IOException: No FileSystem for scheme: s3

Read Parquet - S3

import os
import configparser

from pyspark import SparkContext
from pyspark.sql import SQLContext

# Must be set before the SparkContext is created
os.environ['PYSPARK_SUBMIT_ARGS'] = "--packages com.amazonaws:aws-java-sdk-bundle:1.11.375,org.apache.hadoop:hadoop-aws:3.2.0 pyspark-shell"

sc = SparkContext.getOrCreate()
sql = SQLContext(sc)

hadoop_conf = sc._jsc.hadoopConfiguration()

# Pick up temporary credentials from the standard AWS credentials file;
# "****" stands in for the profile name in ~/.aws/credentials
config = configparser.ConfigParser()
config.read(os.path.expanduser("~/.aws/credentials"))

access_key = config.get("****", "aws_access_key_id")
secret_key = config.get("****", "aws_secret_access_key")
session_key = config.get("****", "aws_session_token")

# Session credentials require the s3a TemporaryAWSCredentialsProvider
# (note the s3a prefix on every key, matching the s3a:// path below)
hadoop_conf.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider")
hadoop_conf.set("fs.s3a.access.key", access_key)
hadoop_conf.set("fs.s3a.secret.key", secret_key)
hadoop_conf.set("fs.s3a.session.token", session_key)

s3_path = "s3a://xxxx/yyyy/zzzz/"

sparkDF = sql.read.parquet(s3_path)
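
Once that configuration is in place, the same session can read the JSON files from the question, as long as the paths use the s3a scheme (the bucket and keys below are the question's placeholders):

# read.json accepts a list of paths, so the files are read as Spark needs them
sparkDF = sql.read.json(["s3a://url/3521.gz", "s3a://url/2734.gz"])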