I'm trying to integrate Spark 2.3.0, running on my Mac, with S3. Reading and writing to S3 works without any problem from spark-shell, but when I try to do the same from a small Scala program that I run via sbt, I get java.lang.NoClassDefFoundError: org/apache/hadoop/fs/GlobalStorageStatistics$StorageStatisticsProvider.
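For context, this is roughly the kind of thing that works from spark-shell (the bucket name here is just a placeholder, not my real bucket):

// in spark-shell; "my-bucket" stands in for my real bucket
val in = spark.read.text("s3a://my-bucket/test.txt")
in.show(5)
in.write.text("s3a://my-bucket/out")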
I have installed hadoop-aws 3.0.0-beta1 and set the S3 access information in spark-2.3.0/conf/spark-defaults.conf:
spark.hadoop.fs.s3a.impl org.apache.hadoop.fs.s3a.S3AFileSystem
spark.hadoop.fs.s3a.access.key XXXX
spark.hadoop.fs.s3a.secret.key YYYY
spark.hadoop.com.amazonaws.services.s3.enableV4 true
spark.hadoop.fs.s3a.endpoint s3.us-east-2.amazonaws.com
spark.hadoop.fs.s3a.fast.upload true
spark.hadoop.fs.s3a.encryption.enabled true
spark.hadoop.fs.s3a.server-side-encryption-algorithm AES256
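As far as I know, the same options could equivalently be set programmatically on the SparkSession builder; this is just a sketch mirroring the entries above, with placeholder credentials:

val spark = SparkSession.builder()
  .master("local")
  .appName("Spark AWS S3 example")
  .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
  .config("spark.hadoop.fs.s3a.access.key", "XXXX")
  .config("spark.hadoop.fs.s3a.secret.key", "YYYY")
  .config("spark.hadoop.fs.s3a.endpoint", "s3.us-east-2.amazonaws.com")
  .getOrCreate()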
The program compiles fine with sbt 0.13. My build.sbt is:
name := "S3Test"
scalaVersion := "2.11.8"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.2.0"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.2.0"
libraryDependencies += "org.apache.hadoop" % "hadoop-aws" % "3.0.0-beta1"
The Scala code is:
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import com.amazonaws._
import com.amazonaws.auth._
import com.amazonaws.services.s3._
import com.amazonaws.services.s3.model._
import java.io._
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.fs.s3a.S3AFileSystem

object S3Test {
  def main(args: Array[String]) = {
    val spark = SparkSession.builder().master("local").appName("Spark AWS S3 example").getOrCreate()
    import spark.implicits._
    val df = spark.read.text("test.txt")
    df.take(5)
    df.write.save(<s3 bucket>)
  }
}
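I run it straight from sbt, i.e. roughly:

sbt run

and that is when the NoClassDefFoundError above appears.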
I have set the JAVA_HOME, HADOOP_HOME, SPARK_HOME, CLASSPATH, and SPARK_DIST_CLASSPATH environment variables, among others, but nothing gets me past this error message.