15

I installed pyspark with pip. I code in Jupyter notebooks. Everything worked fine, but now I get a Java heap space error when exporting a large .csv file. Someone here suggested editing spark-defaults.conf. The Spark documentation also says:

"Note: In client mode, this config must not be set through the SparkConf directly in your application, because the driver JVM has already started at that point. Instead, please set this through the --driver-memory command line option or in your default properties file."

But I'm afraid there is no such file when installing pyspark with pip. Am I right? How do I solve this?

Thanks!

icy
smaica

4 Answers

8

I recently ran into this as well. If you look at the Spark UI under the Classpath Entries, the first path is probably the configuration directory, something like /.../lib/python3.7/site-packages/pyspark/conf/. When I looked for that directory, it didn't exist; presumably it's not part of the pip installation. However, you can easily create it and add your own configuration files. For example,

mkdir /.../lib/python3.7/site-packages/pyspark/conf
vi /.../lib/python3.7/site-packages/pyspark/conf/spark-defaults.conf
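
If you prefer to do the same from Python, here is a minimal sketch that locates the pip-installed pyspark package and writes a spark-defaults.conf into it (the 4g driver memory is only an illustrative value for the heap-space problem in the question):

import os
import pyspark

# conf/ directory inside the pip-installed pyspark package
conf_dir = os.path.join(os.path.dirname(pyspark.__file__), "conf")
os.makedirs(conf_dir, exist_ok=True)

# Append (or create) spark-defaults.conf with the desired settings
with open(os.path.join(conf_dir, "spark-defaults.conf"), "a") as f:
    f.write("spark.driver.memory 4g\n")

The setting only takes effect for a driver JVM started after the file is in place, so restart the kernel before creating a new SparkSession.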
santon
3

The spark-defaults.conf file should be located in:

$SPARK_HOME/conf

If no file is present, create one (a template should be available in the same directory).

How to find the default configuration folder

Check contents of the folder in Python:

import glob, os
glob.glob(os.path.join(os.environ["SPARK_HOME"], "conf", "spark*"))
# ['/usr/local/spark-3.1.2-bin-hadoop3.2/conf/spark-env.sh.template',
#  '/usr/local/spark-3.1.2-bin-hadoop3.2/conf/spark-defaults.conf.template']

When no spark-defaults.conf file is available, built-in values are used

To my surprise, there was no spark-defaults.conf, just a template file!

Still, I could look at the Spark properties, either in the "Environment" tab of the Web UI at http://<driver>:4040 or via getConf().getAll() on the Spark context:

from pyspark.sql import SparkSession
spark = SparkSession \
        .builder \
        .appName("myApp") \
        .getOrCreate()

spark.sparkContext.getConf().getAll()
# [('spark.driver.port', '55128'),
#  ('spark.app.name', 'myApp'),
#  ('spark.rdd.compress', 'True'),
#  ('spark.sql.warehouse.dir', 'file:/path/spark-warehouse'),
#  ('spark.serializer.objectStreamReset', '100'),
#  ('spark.master', 'local[*]'),
#  ('spark.submit.pyFiles', ''),
#  ('spark.app.startTime', '1645484409629'),
#  ('spark.executor.id', 'driver'),
#  ('spark.submit.deployMode', 'client'),
#  ('spark.app.id', 'local-1645484410352'),
#  ('spark.ui.showConsoleProgress', 'true'),
#  ('spark.driver.host', 'xxx.xxx.xxx.xxx')]

Note that not all properties are listed but:

only values explicitly specified through spark-defaults.conf, SparkConf, or the command line. For all other configuration properties, you can assume the default value is used.
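
To check a single key and fall back when it was never set explicitly, SparkConf.get accepts a default value. A quick sketch (the fallback string is just a placeholder):

# Returns the configured value, or the fallback if the key was never set explicitly
spark.sparkContext.getConf().get("spark.driver.memory", "<not set, built-in default applies>")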

For instance, the default parallelism in my case is:

spark._sc.defaultParallelism
# 8

This is the default for local mode, namely the number of cores on the local machine (see https://spark.apache.org/docs/latest/configuration.html). In my case 8 = 2 x 4 cores because of hyper-threading.

If the property spark.default.parallelism is passed when launching the app,

spark = SparkSession \
        .builder \
        .appName("Set parallelism") \
        .config("spark.default.parallelism", 4) \
        .getOrCreate()

then the property is shown in the Web UI and in the list returned by

spark.sparkContext.getConf().getAll()

Precedence of configuration settings

Spark will consider given properties in this order (spark-defaults.conf comes last):

  1. SparkConf
  2. flags passed to spark-submit
  3. spark-defaults.conf

From https://spark.apache.org/docs/latest/configuration.html#dynamically-loading-spark-properties:

Properties set directly on the SparkConf take highest precedence, then flags passed to spark-submit or spark-shell, then options in the spark-defaults.conf file. A few configuration keys have been renamed since earlier versions of Spark; in such cases, the older key names are still accepted, but take lower precedence than any instance of the newer key.
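
As a concrete illustration (with hypothetical values): if spark-defaults.conf contains spark.executor.memory 2g, a value set through SparkConf in the application wins:

from pyspark.sql import SparkSession

# Assumes spark-defaults.conf contains:  spark.executor.memory 2g
spark = SparkSession \
        .builder \
        .appName("precedenceDemo") \
        .config("spark.executor.memory", "4g") \
        .getOrCreate()

spark.sparkContext.getConf().get("spark.executor.memory")
# '4g'  -> the SparkConf value overrides spark-defaults.conf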

Note: Some PySpark Jupyter kernels pass flags for spark-submit through the environment variable $PYSPARK_SUBMIT_ARGS, so one might want to check that too (see the sketch below).
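
For the driver-memory problem in the question, one workaround along these lines is to put the --driver-memory flag into that variable before the first SparkSession is created (a sketch; the 4g value is just an example, and the variable must be set before the driver JVM starts):

import os

# Must run before any SparkSession/SparkContext is created in this kernel;
# the trailing "pyspark-shell" tells spark-submit what to launch.
os.environ["PYSPARK_SUBMIT_ARGS"] = "--driver-memory 4g pyspark-shell"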

Related question: Where to modify spark-defaults.conf if I installed pyspark via pip install pyspark

user2314737
0

The spark-defaults.conf file is needed when you have to change any of the default configs for Spark.

As @niuer suggested, it should be present in the $SPARK_HOME/conf/ directory. But that might not be the case for you. By default, only a template config file will be present there. You can just add a new spark-defaults.conf file in $SPARK_HOME/conf/.
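
A sketch of creating it from the bundled template (assuming SPARK_HOME is set and the template file exists there):

import os, shutil

conf_dir = os.path.join(os.environ["SPARK_HOME"], "conf")
template = os.path.join(conf_dir, "spark-defaults.conf.template")
target = os.path.join(conf_dir, "spark-defaults.conf")

if not os.path.exists(target):
    shutil.copy(template, target)  # then edit the copy to add your settings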

Rishabh Sairawat
-1

Check your Spark path. There are configuration files under $SPARK_HOME/conf/, e.g. spark-defaults.conf.

niuer