Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: ResultStage 9 (runJob at FileFormatWriter.scala:237) has failed the maximum allowable number of times: 4. Most recent failure reason: org.apache.spark.shuffle.FetchFailedException: The relative remote executor(Id: 156), 
which maintains the block data to fetch is dead.
 at org.apache.spark.storage.ShuffleBlockFetcherIterator.throwFetchFailedException(ShuffleBlockFetcherIterator.scala:747)
 at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:662)
 at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:70)
 at org.apache.spark.util.CompletionIterator.next(CompletionIterator.scala:29)
 at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
 at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
 at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
 at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
 at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:31)
 ...

The job reads from Hive and writes the result to Redis. On a retry the tasks re-executed successfully, but I would still like to understand the root cause of the FetchFailedException. What can be done to fix this?
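Roughly, the job looks like the sketch below (a minimal sketch, not the actual code; the spark-redis connector usage and all table/key names are assumptions for illustration):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("hive-to-redis")
      .enableHiveSupport() // read through the Hive metastore
      .getOrCreate()

    // Read the source table from Hive (placeholder table name)
    val df = spark.sql("SELECT * FROM some_db.some_table")

    // Write to Redis, assuming the spark-redis connector is on the classpath
    df.write
      .format("org.apache.spark.sql.redis")
      .option("table", "some_key_prefix") // placeholder Redis key prefix
      .mode("overwrite")
      .save()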

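The only knobs I know of for this class of failure are the shuffle retry/timeout settings, sketched below (the keys are standard Spark configuration settings; the values are placeholders to tune). Is tuning these the right direction, or does a dead executor point at something else, e.g. the executor itself being killed for memory reasons?

    import org.apache.spark.sql.SparkSession

    // Sketch: make shuffle fetches more tolerant of executor loss.
    val spark = SparkSession.builder()
      .appName("hive-to-redis")
      .enableHiveSupport()
      // Retry fetching a shuffle block more times before failing the task
      .config("spark.shuffle.io.maxRetries", "10") // default: 3
      .config("spark.shuffle.io.retryWait", "30s") // default: 5s
      // Allow more time before an unresponsive executor is considered dead
      .config("spark.network.timeout", "600s")     // default: 120s
      // Let shuffle files outlive the executor that wrote them
      // (requires the external shuffle service running on each node)
      .config("spark.shuffle.service.enabled", "true")
      .getOrCreate()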
  • Could you please share some code that can help us understand what you are trying to do? – Islam Elbanna Jun 08 '23 at 18:22
  • It reads from Hive, then writes to Redis. – 湘晗刚 Jun 09 '23 at 01:16
  • It would be useful to share some code. Also, are there any other error logs? – Islam Elbanna Jun 09 '23 at 08:53
  • at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:794) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2234) at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:237) ... 43 more – 湘晗刚 Jun 10 '23 at 02:52
  • The tasks were re-executed successfully on retry, but I still want to find the root cause. – 湘晗刚 Jun 10 '23 at 02:54

0 Answers