
I am developing a Spark application that creates the following SparkSession:

import org.apache.spark.sql.SparkSession;

SparkSession sparkSession = SparkSession.builder()
  .master("local")
  .appName("example of SparkConnection")
  .config("spark.executor.instances", 10000)
  .getOrCreate();

I am trying to read data from a CSV file and write it into database tables. The file is about 100KB in size. I want more than one executor to be used to read and write the file.

I have tried increasing the number of partitions using the `repartition` method, but I am still getting only one executor. Can someone help?
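For reference, the read/repartition/write flow I am attempting looks roughly like this; the input path, JDBC URL, table name, and credentials are placeholders, not real values:

```java
import java.util.Properties;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class CsvToDb {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .master("local")
            .appName("example of SparkConnection")
            .getOrCreate();

        // A ~100KB CSV usually fits in a single input partition.
        Dataset<Row> df = spark.read()
            .option("header", "true")
            .csv("/path/to/input.csv");          // placeholder path

        // Splits the data into 4 partitions, but under the local master
        // every partition is still processed inside the same JVM.
        Dataset<Row> repartitioned = df.repartition(4);

        Properties connProps = new Properties();
        connProps.setProperty("user", "db_user");      // placeholder
        connProps.setProperty("password", "db_pass");  // placeholder

        repartitioned.write()
            .mode("append")
            .jdbc("jdbc:postgresql://localhost:5432/mydb", // placeholder URL
                  "target_table",                          // placeholder table
                  connProps);

        spark.stop();
    }
}
```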

Olaf Kock
Ritika Garg
  • Does this answer your question? [How to allocate more executors per worker in Standalone cluster mode?](https://stackoverflow.com/questions/29955133/how-to-allocate-more-executors-per-worker-in-standalone-cluster-mode) – Lamanus Aug 25 '20 at 14:17
  • 1
    I see that you are using `local` master. With `local` master the spark application is executed in single JVM on local machine itself. There is not much significance of executor instances with `local` master. – vatsal mevada Aug 25 '20 at 14:20
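To expand on the comment above: with `.master("local")` everything runs in a single JVM, so `spark.executor.instances` has no effect. To actually get multiple executors, the job has to be submitted to a cluster manager such as YARN or a standalone cluster. A rough sketch of such a submission, where the jar path, class name, and resource sizes are placeholders:

```shell
# Submit to YARN instead of local mode so that executor settings take effect.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 4 \
  --executor-cores 2 \
  --executor-memory 2g \
  --class com.example.CsvToDb \
  /path/to/app.jar
```

Note that `--num-executors` (equivalent to `spark.executor.instances`) only applies under YARN; a standalone master sizes executors via `spark.executor.cores` and `spark.cores.max` instead.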

0 Answers