
I've been unsuccessful in setting up a Spark cluster that can read AWS S3 files. The software I used is as follows:

  1. hadoop-aws-3.2.0.jar
  2. aws-java-sdk-1.11.887.jar
  3. spark-3.0.1-bin-hadoop3.2.tgz

Python version: 3.8.6

from pyspark.sql import SparkSession, SQLContext
from pyspark.sql.types import *
from pyspark.sql.functions import *
import sys

spark = (SparkSession.builder
         .appName("AuthorsAges")
         .getOrCreate())


spark._jsc.hadoopConfiguration().set("fs.s3a.access.key", "access-key")
spark._jsc.hadoopConfiguration().set("fs.s3a.secret.key", "secret-key")
spark._jsc.hadoopConfiguration().set("fs.s3a.impl","org.apache.hadoop.fs.s3a.S3AFileSystem")
spark._jsc.hadoopConfiguration().set("com.amazonaws.services.s3.enableV4", "true")
spark._jsc.hadoopConfiguration().set("fs.s3a.aws.credentials.provider","org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider")
spark._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "")


input_file='s3a://spark-test-data/Fire_Department_Calls_for_Service.csv'

file_schema = StructType([StructField("Call_Number",StringType(),True),
        StructField("Unit_ID",StringType(),True),
        StructField("Incident_Number",StringType(),True),
...
...
# Read file into a Spark DataFrame
input_df = (spark.read.format("csv") \
            .option("header", "true") \
            .schema(file_schema) \
            .load(input_file))

The code fails as soon as it executes the spark.read...load() call: it appears that it can't find the class and throws java.lang.NoClassDefFoundError: com.amazonaws.services.s3.model.MultiObjectDeleteException

  File "<stdin>", line 1, in <module>
  File "/usr/local/spark/spark-3.0.1-bin-hadoop3.2/python/pyspark/sql/readwriter.py", line 178, in load
    return self._df(self._jreader.load(path))
  File "/usr/local/spark/spark-3.0.1-bin-hadoop3.2/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1304, in __call__
  File "/usr/local/spark/spark-3.0.1-bin-hadoop3.2/python/pyspark/sql/utils.py", line 128, in deco
    return f(*a, **kw)
  File "/usr/local/spark/spark-3.0.1-bin-hadoop3.2/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o51.load.
: java.lang.NoClassDefFoundError: com/amazonaws/services/s3/model/MultiObjectDeleteException
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2532)
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2497)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2593)
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3269)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3301)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
    at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:46)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:366)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:297)
    at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:286)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:286)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:232)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.amazonaws.services.s3.model.MultiObjectDeleteException
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)

I've been trying to find the right combination of the above jars and Python, but I couldn't find the right mix. I kept getting all kinds of NoClassDefFoundError, so I decided to use the latest versions of all the jars and Python listed above, but I'm still unsuccessful.

I would like to know what versions of the jars and Python you used to successfully set up a cluster that can access S3 using s3a via PySpark. Thank you in advance for your response/help.

banjoman
4 Answers


Hadoop 3.2 was built against AWS SDK 1.11.563; put the full shaded SDK of that specific version, aws-java-sdk-bundle, on your classpath and all should be well.

The SDK has been "fussy" in the past, and upgrading it invariably causes surprises. For the curious, see Qualifying an AWS SDK update. It's probably about time someone did it again.
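A minimal sketch of one way to wire that in, assuming Spark 3.0.1 with Hadoop 3.2 and letting Spark resolve the jars from Maven via spark.jars.packages (the coordinates and app name below are illustrative, not taken from the question; 1.11.563 is the SDK version this answer says Hadoop 3.2 was built against):

from pyspark.sql import SparkSession

# Pull hadoop-aws and the matching shaded SDK bundle at session startup;
# spark.jars.packages must be set before the session is created.
spark = (SparkSession.builder
         .appName("S3AExample")
         .config("spark.jars.packages",
                 "org.apache.hadoop:hadoop-aws:3.2.0,"
                 "com.amazonaws:aws-java-sdk-bundle:1.11.563")
         .getOrCreate())

# S3A credentials, as in the question's configuration
spark._jsc.hadoopConfiguration().set("fs.s3a.access.key", "access-key")
spark._jsc.hadoopConfiguration().set("fs.s3a.secret.key", "secret-key")

With a matching bundle on the classpath, explicitly setting fs.s3a.impl as the question does is typically not needed.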

stevel

I was able to solve this issue on Spark 3.0 / Hadoop 3.2. I documented my answer here as well: AWS EKS Spark 3.0, Hadoop 3.2 Error - NoClassDefFoundError: com/amazonaws/services/s3/model/MultiObjectDeleteException

Use the following AWS Java SDK bundle and this issue will be solved:

aws-java-sdk-bundle-1.11.874.jar (https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-bundle/1.11.874)
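If you prefer to download the jar yourself rather than have Spark resolve it, here is a sketch using spark.jars; the local paths and app name are hypothetical, so adjust them to wherever you placed the jars:

from pyspark.sql import SparkSession

# /opt/spark/extra-jars/ is a hypothetical location holding hadoop-aws
# and the downloaded SDK bundle named in this answer.
spark = (SparkSession.builder
         .appName("S3AExample")
         .config("spark.jars",
                 "/opt/spark/extra-jars/hadoop-aws-3.2.0.jar,"
                 "/opt/spark/extra-jars/aws-java-sdk-bundle-1.11.874.jar")
         .getOrCreate())

Dropping both jars into $SPARK_HOME/jars/ also works, since everything there is already on the driver and executor classpath.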

Prateek Dubey

So I cleaned up everything and re-installed the following versions of the jars, and it worked: hadoop-aws-2.7.4.jar and aws-java-sdk-1.7.4.2.jar. Spark install version: spark-2.4.7-bin-hadoop2.7. Python version: 3.6.

banjoman

In addition to the other answers, I will add my 2 cents. Although adding the aws-java-sdk-bundle works perfectly, I found it better to add the specific dependencies to keep the package smaller.

I replaced aws-java-sdk-bundle with aws-java-sdk-sts, aws-java-sdk-s3, and aws-java-sdk-dynamodb. The final size went from ~200 MB to ~125 MB.
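As a sketch of what that can look like with spark.jars.packages, excluding the transitive bundle so only the individual modules are shipped (the 1.11.563 versions and app name are assumptions; keep the SDK versions in sync with whatever your hadoop-aws jar was built against):

from pyspark.sql import SparkSession

# Individual SDK modules instead of the full aws-java-sdk-bundle.
# spark.jars.excludes keeps Ivy from also pulling the bundle that
# hadoop-aws declares as a transitive dependency.
spark = (SparkSession.builder
         .appName("S3AExample")
         .config("spark.jars.packages",
                 "org.apache.hadoop:hadoop-aws:3.2.0,"
                 "com.amazonaws:aws-java-sdk-sts:1.11.563,"
                 "com.amazonaws:aws-java-sdk-s3:1.11.563,"
                 "com.amazonaws:aws-java-sdk-dynamodb:1.11.563")
         .config("spark.jars.excludes", "com.amazonaws:aws-java-sdk-bundle")
         .getOrCreate())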

Felipe