I am developing a Spark application that uses the following SparkSession:
import org.apache.spark.sql.SparkSession;

SparkSession sparkSession = SparkSession.builder()
        .master("local")
        .appName("example of SparkConnection")
        .config("spark.executor.instances", 10000)
        .getOrCreate();
I am trying to read data from a CSV file and write it into database tables. The file is about 100 KB in size, and I want more than one executor to be used to read and write it.
I have tried increasing the number of partitions with the repartition method, but I still get only one executor. Can someone help?
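For reference, the read/repartition/write code I am running looks roughly like this (the CSV path, JDBC URL, table name, and credentials are placeholders, not my real values):

```java
import java.util.Properties;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class CsvToDb {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .master("local")
                .appName("example of SparkConnection")
                .config("spark.executor.instances", 10000)
                .getOrCreate();

        // Read the ~100 KB CSV file (path is a placeholder)
        Dataset<Row> df = spark.read()
                .option("header", "true")
                .csv("/path/to/input.csv");

        // Attempt to spread the work across more tasks
        df = df.repartition(10);

        // Write to the database over JDBC (URL, table, credentials are placeholders)
        Properties props = new Properties();
        props.setProperty("user", "user");
        props.setProperty("password", "password");
        df.write()
                .mode(SaveMode.Append)
                .jdbc("jdbc:postgresql://localhost:5432/mydb", "my_table", props);

        spark.stop();
    }
}
```

Even with repartition(10), the Spark UI shows everything running on a single executor.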