
I've been trying to get the spark-deep-learning library working on my EMR cluster so that I can read images in parallel with Python 2.7. I have been searching for a solution for quite some time now without success. I have tried different configuration settings in the conf for the SparkSession, and I get the following error when trying to create a SparkSession object:
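For context, this is roughly how the session gets created (a sketch, not my exact script; the app name and conf values are illustrative, and the jar path matches the one I pass to spark-submit below):

```python
from pyspark.sql import SparkSession

JAR = "/home/hadoop/spark-deep-learning-0.2.0-spark2.1-s_2.11.jar"

spark = (
    SparkSession.builder
    .appName("image-pipeline")       # illustrative name
    .config("spark.jars", JAR)       # make the sparkdl jar visible to Spark
    .getOrCreate()                   # the YARN error below is raised here
)
```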

ERROR SparkContext:91 - Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
   at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
   at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
   at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
   at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
   at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
   at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
   at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
   at py4j.Gateway.invoke(Gateway.java:238)
   at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
   at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
   at py4j.GatewayConnection.run(GatewayConnection.java:214)
   at java.lang.Thread.run(Thread.java:748)

The above was the result when using a Jupyter notebook. I also tried submitting the .py file with spark-submit, adding the jar I need as the value for --jars, --driver-class-path, and --conf spark.executor.extraClassPath, as discussed in this link. Here is the command I submit, along with the resulting import error:

bin/spark-submit --jars /home/hadoop/spark-deep-learning-0.2.0-spark2.1-s_2.11.jar \
--driver-class-path /home/hadoop/spark-deep-learning-0.2.0-spark2.1-s_2.11.jar \
--conf spark.executor.extraClassPath=/home/hadoop/spark-deep-learning-0.2.0-spark2.1-s_2.11.jar \
/home/hadoop/RunningCode6.py

Traceback (most recent call last):
  File "/home/hadoop/RunningCode6.py", line 74, in <module>
    from sparkdl import KerasImageFileTransformer
ImportError: No module named sparkdl

The library works fine in standalone mode, but I keep getting one of the above errors whenever I use cluster mode.

I really hope someone can help me solve this, because I've been staring at it for weeks now and I need to get it working.

Thanks!
