
The core machines of my EMR Spark cluster each have a 128 GB EBS volume. My Spark program is failing with a "No space left on device" error, so I want to point the temp directory that Spark uses at the larger filesystem on /dev/xvdb2 (mounted at /mnt). How can I do that?

Below is the df -h output from one of my core machines:

Filesystem      Size  Used Avail Use% Mounted on
devtmpfs         34G   78k   34G   1% /dev
tmpfs            34G     0   34G   0% /dev/shm
/dev/xvda1       11G  4.3G  6.1G  42% /
/dev/xvdb1      5.4G   35M  5.4G   1% /emr
/dev/xvdb2      133G   12G  121G   9% /mnt
– dks551
  • Possible duplicate of [Why does a job fail with "No space left on device", but df says otherwise?](https://stackoverflow.com/questions/25707784/why-does-a-job-fail-with-no-space-left-on-device-but-df-says-otherwise) – 10465355 Jan 10 '19 at 17:31

1 Answer


Try putting something like this in your spark/conf/spark-env.sh:

export SPARK_LOCAL_DIRS=/opt/spark/tmp
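
For the layout in the question, where the large volume (/dev/xvdb2, 133G) is mounted at /mnt, a sketch of the same idea might look like this; the directory name /mnt/spark-tmp is just an example, not something EMR creates for you:

# create a scratch directory on the large volume mounted at /mnt
sudo mkdir -p /mnt/spark-tmp
# assuming jobs run as the default EMR "hadoop" user
sudo chown hadoop:hadoop /mnt/spark-tmp

# then in spark/conf/spark-env.sh:
export SPARK_LOCAL_DIRS=/mnt/spark-tmp

SPARK_LOCAL_DIRS is read when the workers start, so restart them (or set the variable before the cluster comes up) for the change to take effect. The equivalent spark-defaults.conf property is spark.local.dir; note that on YARN-based clusters such as EMR, the scratch location can also be governed by yarn.nodemanager.local-dirs.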

– Feng Xue