Here is a Scala version that works fine with Spark 3.2.1 (pre-built) with Hadoop 3.3.1, accessing an S3 bucket from a non-AWS machine (typically a local setup on a developer machine).
sbt
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "3.2.1" % "provided",
  "org.apache.spark" %% "spark-streaming" % "3.2.1" % "provided",
  "org.apache.spark" %% "spark-sql" % "3.2.1" % "provided",
  "org.apache.hadoop" % "hadoop-aws" % "3.3.1",
  "org.apache.hadoop" % "hadoop-common" % "3.3.1" % "provided"
)
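For context, these dependencies sit in an ordinary build.sbt; a minimal sketch of the surrounding settings, where the project name is a placeholder chosen here and the Scala version reflects that the pre-built Spark 3.2.1 distribution targets Scala 2.12:
name := "s3a-parquet-example"   // placeholder project name, not from the original
version := "0.1.0"
scalaVersion := "2.12.15"       // pre-built Spark 3.2.1 is compiled against Scala 2.12
// libraryDependencies ++= Seq(...)  as shown above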
Spark program
import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder()
  .master("local")
  .appName("Process parquet file")
  .config("spark.hadoop.fs.s3a.path.style.access", true)
  .config("spark.hadoop.fs.s3a.access.key", ACCESS_KEY)
  .config("spark.hadoop.fs.s3a.secret.key", SECRET_KEY)
  .config("spark.hadoop.fs.s3a.endpoint", ENDPOINT)
  .config(
    "spark.hadoop.fs.s3a.impl",
    "org.apache.hadoop.fs.s3a.S3AFileSystem"
  )
  // Enabling V4 signing does not seem necessary for the eu-west-3 region
  // see @stevel comment below
  // .config("com.amazonaws.services.s3.enableV4", true)
  // .config(
  //   "spark.driver.extraJavaOptions",
  //   "-Dcom.amazonaws.services.s3.enableV4=true"
  // )
  .config("spark.executor.instances", "4")
  .getOrCreate()

spark.sparkContext.setLogLevel("ERROR")

val df = spark.read.parquet("s3a://[BUCKET NAME]/.../???.parquet")
df.show()
Note: ENDPOINT is of the form s3.[REGION].amazonaws.com, e.g. s3.eu-west-3.amazonaws.com
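Writing back through the same S3A configuration is symmetric; a minimal sketch (the output prefix is a placeholder, not from the original), which relies on the s3:PutObject permission granted in the bucket policy below:
df.write
  .mode("overwrite") // overwriting existing files also needs the s3:Delete* permission from the policy below
  .parquet("s3a://[BUCKET NAME]/output/")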
S3 configuration
To make the bucket available from outside of AWS, add a Bucket Policy of the form:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::[ACCOUNT ID]:user/[IAM USERNAME]"
      },
      "Action": [
        "s3:Delete*",
        "s3:Get*",
        "s3:List*",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::[BUCKET NAME]/*"
    }
  ]
}
The ACCESS_KEY and SECRET_KEY supplied to the Spark configuration must be those of the IAM user referenced in the bucket policy.
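Rather than hard-coding the keys in source, they can be read from the environment at startup; a minimal sketch, assuming the standard AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables are set on the developer machine:
// Read the IAM user's keys from the environment instead of embedding them in source
val ACCESS_KEY = sys.env("AWS_ACCESS_KEY_ID")
val SECRET_KEY = sys.env("AWS_SECRET_ACCESS_KEY")
S3A's default credential provider chain can also pick these environment variables up directly, in which case the explicit fs.s3a.access.key / fs.s3a.secret.key settings can be omitted.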