I am trying to run my Spark word count program using YARN as the cluster manager. I am using Hadoop 2.6, and YARN is configured to run in pseudo-distributed mode. The application submission is failing:
spark-submit --master yarn --class sbook.helloworld.WordCount \
  target/scala-2.11/sparkbookapp_2.11-1.0.jar \
  src/main/resources/data.txt output
Given below is the error I was able to retrieve from the NodeManager (see the log entry at level ERROR). It looks like the issue is with container allocation, but I am not able to find out the reason.
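In case the retrieval steps matter: I pulled the container logs with the standard YARN CLI (this assumes log aggregation is enabled; otherwise the same output sits under the NodeManager's local userlogs directory):

# Fetch all container logs for the failed application
# (application id taken from the log below)
yarn logs -applicationId application_1434487429379_0008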
INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1434487429379_0008_000002
INFO spark.SecurityManager: Changing view acls to: mountain
INFO spark.SecurityManager: Changing modify acls to: mountain
INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(mountain); users with modify permissions: Set(mountain)
WARN util.Utils: Your hostname, mountain resolves to a loopback address: 127.0.1.1; using 10.0.0.6 instead (on interface wlan0)
WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
INFO slf4j.Slf4jLogger: Slf4jLogger started
INFO Remoting: Starting remoting
INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkYarnAM@10.0.0.6:42711]
INFO util.Utils: Successfully started service 'sparkYarnAM' on port 42711.
INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
INFO yarn.ApplicationMaster: Driver now available: 10.0.0.6:34231
INFO yarn.ApplicationMaster: Listen to driver: akka.tcp://sparkDriver@10.0.0.6:34231/user/YarnScheduler
INFO yarn.ApplicationMaster: Add WebUI Filter. AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS -> mountain, PROXY_URI_BASES -> http://mountain:8088/proxy/application_1434487429379_0008),/proxy/application_1434487429379_0008)
INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8030
INFO yarn.YarnRMClient: Registering the ApplicationMaster
INFO yarn.YarnAllocator: Will request 2 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
INFO yarn.ApplicationMaster: Started progress reporter thread - sleep time : 5000
ERROR yarn.ApplicationMaster: RECEIVED SIGNAL 15: SIGTERM
INFO yarn.ApplicationMaster: Final app status: UNDEFINED, exitCode: 0, (reason: Shutdown hook called before final status was reported.)
INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with UNDEFINED (diag message: Shutdown hook called before final status was reported.)
INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
INFO yarn.ApplicationMaster: Deleting staging directory .sparkStaging/application_1434487429379_0008
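Since the SIGTERM arrives right after the two 1408 MB container requests, I suspect the single NodeManager may not have capacity for both of them. One way to check this with the standard YARN CLI (the node id below is a placeholder; use whatever yarn node -list actually prints):

# List registered NodeManagers and their ids
yarn node -list

# Show Memory-Capacity vs. Memory-Used for a given node
# (node id is illustrative - substitute the one printed above)
yarn node -status mountain:44921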
I have also tried setting the executor memory to 1g in the spark-submit script, but that did not help either; the NodeManager was still trying to create an executor of 1408 MB.
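If I read the log line above correctly, the 1408 MB is just the 1g default executor memory plus the 384 MB YARN overhead, so passing 1g explicitly would not change anything. For reference, this is the shape of invocation I mean (standard spark-submit flags; the 512m value is only an example, not my actual config):

spark-submit --master yarn \
  --class sbook.helloworld.WordCount \
  --executor-memory 512m \
  --num-executors 2 \
  target/scala-2.11/sparkbookapp_2.11-1.0.jar \
  src/main/resources/data.txt output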