
I think I'm encountering a jar incompatibility. I'm using the following jar files to build a Spark cluster:

  1. spark-2.4.7-bin-hadoop2.7.tgz
  2. aws-java-sdk-1.11.885.jar
  3. hadoop-aws-2.7.4.jar

My PySpark code:
from pyspark.sql import SparkSession, SQLContext
from pyspark.sql.types import *
from pyspark.sql.functions import *
import sys

spark = (SparkSession.builder
         .appName("AuthorsAges")
         .appName('SparkCassandraApp')
         .getOrCreate())


spark._jsc.hadoopConfiguration().set("fs.s3a.access.key", "access-key")
spark._jsc.hadoopConfiguration().set("fs.s3a.secret.key", "secret-key")
spark._jsc.hadoopConfiguration().set("fs.s3a.impl","org.apache.hadoop.fs.s3a.S3AFileSystem")
spark._jsc.hadoopConfiguration().set("com.amazonaws.services.s3.enableV4", "true")
spark._jsc.hadoopConfiguration().set("fs.s3a.aws.credentials.provider","org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider")
spark._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "")


input_file='s3a://spark-test-data/Fire_Department_Calls_for_Service.csv'

file_schema = StructType([StructField("Call_Number",StringType(),True),
        StructField("Unit_ID",StringType(),True),
        StructField("Incident_Number",StringType(),True),
...
...
# Read file into a Spark DataFrame
input_df = (spark.read.format("csv")
            .option("header", "true")
            .schema(file_schema)
            .load(input_file))

The code fails as soon as it executes the spark.read...load() call. It appears that it can't find a class: java.lang.NoClassDefFoundError: com/amazonaws/AmazonServiceException.

My spark-defaults.conf is configured as follows:

spark.jars.packages                com.amazonaws:aws-java-sdk:1.11.885,org.apache.hadoop:hadoop-aws:2.7.4
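
As a minimal sketch (not something I have fully verified), the packages and jars the session actually resolved can be checked from the running session itself; spark here is the session created above:

# Sketch: confirm what the running session actually picked up.
# `spark` is the SparkSession created above.
print(spark.sparkContext.getConf().get("spark.jars.packages", "<not set>"))

# listJars() on the underlying Scala SparkContext shows the jars that were
# distributed with the application.
print(spark.sparkContext._jsc.sc().listJars())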

I would appreciate it if someone could help me. Any ideas?

Traceback (most recent call last):
  File "<stdin>", line 5, in <module>
  File "/usr/local/spark/spark-3.0.1-bin-hadoop2.7/python/pyspark/sql/readwriter.py", line 178, in load
    return self._df(self._jreader.load(path))
  File "/usr/local/lib/python3.6/site-packages/py4j/java_gateway.py", line 1305, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/usr/local/spark/spark-3.0.1-bin-hadoop2.7/python/pyspark/sql/utils.py", line 128, in deco
    return f(*a, **kw)
  File "/usr/local/lib/python3.6/site-packages/py4j/protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o51.load.
: java.lang.NoClassDefFoundError: com/amazonaws/AmazonServiceException
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2195)
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:46)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:366)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:297)
    at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:286)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:286)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:232)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.amazonaws.AmazonServiceException
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    ... 30 more
banjoman

2 Answers


hadoop-aws 2.7.4 was built against aws-java-sdk 1.7.4, which isn't completely compatible with newer versions, so if you use a newer aws-java-sdk, Hadoop can't find the classes it needs. You have the following choices:

  • remove the explicit dependency on aws-java-sdk, if you don't need newer functionality
  • compile Spark 2.4 with Hadoop 3 using the hadoop-3.1 profile, as described in the documentation
  • switch to Spark 3.0.x, which already has a version built with Hadoop 3.2 (see the sketch after this list)
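
For the third option, a minimal sketch of what the session setup could look like (the hadoop-aws:3.2.0 coordinate and the credentials are illustrative, not from this answer; letting spark.jars.packages resolve hadoop-aws pulls in the matching aws-java-sdk-bundle transitively, so the two versions can't drift apart):

from pyspark.sql import SparkSession

# Sketch assuming the Spark 3.0.x download built with Hadoop 3.2.
# The package coordinate must match the Hadoop version of that build;
# 3.2.0 here is illustrative. Credentials are placeholders.
spark = (SparkSession.builder
         .appName("S3ReadTest")
         .config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:3.2.0")
         .config("spark.hadoop.fs.s3a.access.key", "access-key")
         .config("spark.hadoop.fs.s3a.secret.key", "secret-key")
         .getOrCreate())

# Any s3a:// path should now load through a consistent SDK.
df = (spark.read
      .option("header", "true")
      .csv("s3a://spark-test-data/Fire_Department_Calls_for_Service.csv"))
df.printSchema()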
Alex Ott
  • I followed option 3 and also used hadoop-aws-3.2.0.jar and aws-java-sdk-1.11.887.jar. I also upgraded to Python 3.8.6, but I'm getting a different NoClassDefFoundError, on com.amazonaws.services.s3.model.MultiObjectDeleteException. I opened a new question: https://stackoverflow.com/questions/64563127/pyspark-s3-error-java-lang-noclassdeffounderror-com-amazonaws-services-s3-mode – banjoman Oct 27 '20 at 21:29
  • The AWS SDK often isn't binary compatible between versions. Why do you need this specific version? – Alex Ott Oct 27 '20 at 21:30
  • Not specifically. I ran out of ideas, so I tried using all the latest versions hoping it would fix things. Honestly, I can use any version of the jars as long as I can read S3 files from PySpark with Python 3.x. Which versions of the jars worked for you? – banjoman Oct 27 '20 at 21:52
  • hadoop-aws should have all necessary dependencies. Newer Hadoop versions are usually needed only for specific new functionality, like custom encryption keys, etc. – Alex Ott Oct 27 '20 at 21:55
  • So I cleaned up everything and re-installed the following versions of the jars, and it worked: hadoop-aws-2.7.4.jar and aws-java-sdk-1.7.4.2.jar. Spark install version: spark-2.4.7-bin-hadoop2.7. Python version: Python 3.8.6. Thank you for the help!!! – banjoman Oct 27 '20 at 22:45
  • Correction. I used python 3.6 – banjoman Oct 27 '20 at 23:01
  • This has a beautiful explanation: https://notadatascientist.com/running-apache-spark-and-s3-locally/ – pnv May 05 '21 at 01:32

I encountered the same problem and I was able to resolve it thanks to https://notadatascientist.com/running-apache-spark-and-s3-locally/

Steps to follow:

  1. Check the Hadoop version installed with Spark (look at the jars in the Spark jars directory; a sketch of this check follows after the list).
  2. Download the hadoop-aws jar with that same Hadoop version.
  3. Download the aws-java-sdk jar (check which version was used when that hadoop-aws release was developed).
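
A minimal sketch of step 1, assuming a typical local install (the SPARK_HOME fallback path is only an example):

import glob
import os
import re

# Sketch: infer the Hadoop version bundled with Spark by inspecting the
# hadoop-common jar under $SPARK_HOME/jars.
spark_home = os.environ.get("SPARK_HOME", "/usr/local/spark/spark-2.4.7-bin-hadoop2.7")
for jar in sorted(glob.glob(os.path.join(spark_home, "jars", "hadoop-common-*.jar"))):
    match = re.search(r"hadoop-common-(\d+(?:\.\d+)+)\.jar", os.path.basename(jar))
    if match:
        print("Bundled Hadoop version:", match.group(1))

The matching hadoop-aws jar carries the same version number, and its pom.xml on Maven Central lists the aws-java-sdk version it was built against (1.7.4 for hadoop-aws 2.7.x, as noted in the other answer).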
SCouto