
I am running a Spark job to calculate interactions. After the map step I group by a key, and Spark stays stuck in a pending state, showing no error and no stage information.
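
The job is shaped roughly like this (a minimal sketch with a placeholder input path, key, and field layout; not the actual code):

import org.apache.spark.{SparkConf, SparkContext}

object InteractionSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("interactions"))
    val grouped = sc.textFile("hdfs:///path/to/events") // placeholder input
      .map { line =>
        val fields = line.split("\t")
        (fields(0), fields) // assumed: key on the first column
      }
      .groupByKey() // the job appears to hang once this stage is scheduled
    grouped.count() // any action here triggers the pending stages
    sc.stop()
  }
}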

I want to know what may cause this and how to debug it, because the same job runs fine in local mode.

Checking the log, there are no error messages:

16/01/05 14:44:47 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(ExpireDeadHosts,true) from Actor[akka://sparkDriver/temp/$Sm]
16/01/05 14:44:47 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(ExpireDeadHosts,true)
16/01/05 14:44:47 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (0.262362 ms) AkkaMessage(ExpireDeadHosts,true) from Actor[akka://sparkDriver/temp/$Sm]

16/01/05 14:44:53 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(Heartbeat(driver,[Lscala.Tuple2;@5757087f,BlockManagerId(driver, localhost, 56860)),true) from Actor[akka://sparkDriver/temp/$Tm]
16/01/05 14:45:03 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(BlockManagerHeartbeat(BlockManagerId(driver, localhost, 56860)),true)
16/01/05 14:45:03 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (0.319169 ms) AkkaMessage(BlockManagerHeartbeat(BlockManagerId(driver, localhost, 56860)),true) from Actor[akka://sparkDriver/temp/$Wm]
16/01/05 14:45:13 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(Heartbeat(driver,[Lscala.Tuple2;@682d459,BlockManagerId(driver, localhost, 56860)),true) from Actor[akka://sparkDriver/temp/$Xm]

I am using Spark 1.5.2, and I am testing on an Amazon EC2 instance. The BlockManager port from the heartbeats (56860) is listening:

netstat -a -o | grep 56860
tcp6       0      0 [::]:56860              [::]:*                  LISTEN      off (0.00/0/0)

I run the job with this command:

spark-submit --class com.knx.analytics.InteractionProcessor --files dev.conf --conf 'spark.executor.extraJavaOptions=-Dconfig.fuction.conf' --conf 'spark.driver.extraJavaOptions=-Dconfig.file=dev.conf' --jars fast-aggregate-assembly-1.0-deps.jar --driver-memory 5g fast-aggregate-1.jar -s 2015-11-02 -e 2015-11-06

UPDATE

ubuntu@adedge-bd-test:~ [23:20:53]$ jps -lm
10903 sun.tools.jps.Jps -lm
7834 org.apache.spark.deploy.SparkSubmit --conf spark.driver.memory=3g --conf spark.executor.extraJavaOptions=-Dconfig.fuction.conf --conf spark.driver.extraJavaOptions=-Dconfig.file=dev.conf --class com.knx.analytics.InteractionProcessor --files dev.conf --jars fast-aggregate-assembly-1.0-deps.jar fast-aggregate.jar -s 2015-11-02 -e 2015-11-02

The full jstack log is here.
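
For reference, a dump like this can be taken from the driver PID that jps reported above (the output filename is arbitrary):

jstack -l 7834 > jstack.log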

Here is part of it:

"main" prio=10 tid=0x00007f2bb8008000 nid=0x1ebd in Object.wait() [0x00007f2bc19d5000]
   java.lang.Thread.State: WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    - waiting on <0x0000000744008a88> (a org.apache.spark.scheduler.JobWaiter)
    at java.lang.Object.wait(Object.java:503)
    at org.apache.spark.scheduler.JobWaiter.awaitResult(JobWaiter.scala:73)
    - locked <0x0000000744008a88> (a org.apache.spark.scheduler.JobWaiter)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:559)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1824)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1837)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1914)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1055)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:998)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:998)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:998)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:938)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:930)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:930)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:930)
    at com.knx.analytics.InteractionProcessor$.writeToMongo(InteractionProcessor.scala:150)
    at com.knx.analytics.InteractionProcessor$.main(InteractionProcessor.scala:138)
    at com.knx.analytics.InteractionProcessor.main(InteractionProcessor.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

   Locked ownable synchronizers:
    - None
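
The main thread is blocked in JobWaiter.awaitResult while writeToMongo runs saveAsNewAPIHadoopFile, i.e. the driver is waiting for the submitted job to finish rather than deadlocked in user code. For context, writeToMongo presumably follows the usual mongo-hadoop save pattern, roughly like this (a sketch with a placeholder URI and assumed key/value types; not the real method):

import org.apache.hadoop.conf.Configuration
import org.apache.spark.rdd.RDD
import org.bson.BSONObject
import com.mongodb.hadoop.MongoOutputFormat

// Sketch of the usual mongo-hadoop save pattern (placeholder URI, assumed types):
def writeToMongo(rdd: RDD[(Object, BSONObject)]): Unit = {
  val config = new Configuration()
  config.set("mongo.output.uri", "mongodb://localhost:27017/analytics.interactions") // placeholder
  rdd.saveAsNewAPIHadoopFile(
    "file:///unused",                               // path is ignored by MongoOutputFormat
    classOf[Object],                                // key class
    classOf[BSONObject],                            // value class
    classOf[MongoOutputFormat[Object, BSONObject]], // output format
    config)
}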

After searching, I found something that looks related here: Stages.
