I have a Spark job (Hadoop version 3.1.1), written in Scala, that runs periodically. I can tell the job finishes successfully because we get the summary email and the output files are where they should be, but when the Spark context is shutting down it errors out:
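For context, the driver code is structured roughly like this. This is a simplified sketch, not the actual job (the object name and app name are placeholders); the real job reads its inputs, writes the output files, and triggers the summary email before the session is stopped:

import org.apache.spark.sql.SparkSession

// Placeholder object name -- stands in for the real job's entry point.
object PeriodicJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("periodic-job") // placeholder app name
      .getOrCreate()

    try {
      // ... read inputs, transform, write the output files ...
    } finally {
      // The errors in the examples below appear after this point,
      // while executors are being torn down during context shutdown.
      spark.stop()
    }
  }
}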

Example 1:

20/03/20 10:55:16 INFO BlockManager: Removing RDD 3170
20/03/20 10:55:47 ERROR CoarseGrainedExecutorBackend: Executor self-exiting due to : Driver <server-name> disassociated! Shutting down.
20/03/20 10:55:47 INFO DiskBlockManager: Shutdown hook called
20/03/20 10:55:48 ERROR CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM
20/03/20 10:55:48 INFO ShutdownHookManager: Shutdown hook called

Example 2:

20/03/20 12:57:22 ERROR CoarseGrainedExecutorBackend: Executor self-exiting due to : Driver <server-name> disassociated! Shutting down.
20/03/20 12:57:22 INFO DiskBlockManager: Shutdown hook called
20/03/20 12:57:23 INFO ShutdownHookManager: Shutdown hook called

What could be causing this error?
