I have a Spark application which loads data from CSV files, calls the Drools engine, uses flatMap, and saves the results to output files (.csv).
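For context, the job is structured roughly like the sketch below (paths, the class name, and the rule output handling are simplified placeholders, not my actual code):

```scala
import org.apache.spark.sql.SparkSession
import org.kie.api.KieServices

// Rough outline of the job; paths and the rule output mapping are placeholders.
object RulesJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("csv-drools-job").getOrCreate()

    // Load the input CSV files as plain lines.
    val lines = spark.sparkContext.textFile("/data/input/*.csv")

    // Each record is pushed through the Drools engine; flatMap lets one input
    // line produce zero or more output rows. Session handling here is purely
    // illustrative; the real job may manage sessions differently.
    val results = lines.flatMap { line =>
      val session = KieServices.Factory.get().getKieClasspathContainer.newKieSession()
      session.insert(line)
      session.fireAllRules()
      session.dispose()
      Seq(line) // placeholder for the rows produced by the rules
    }

    // Write the results back out as CSV text files.
    results.saveAsTextFile("/data/output")
    spark.stop()
  }
}
```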
Below are the two test cases:
1) When I run this application with 2 worker nodes having the same configuration (8 cores each), the application takes 5.2 minutes to complete:
- No. of Executors: 133
- Total No. of Cores: 16
- Used Memory: 2 GB (1 GB per executor)
- Available Memory: 30 GB
2) When I run this application with 3 worker nodes having the same configuration (8 cores each), the application takes 7.6 minutes to complete:
- No. of Executors: 266
- Total No. of Cores: 24
- Used Memory: 3 GB (1 GB per executor)
- Available Memory: 45 GB
Expected result
It should take less time after adding one more worker node with the same configuration.
Actual result
It takes more time after adding one more worker node with the same configuration.
I am running the application with the spark-submit command in standalone mode.
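For reference, the submit command looks roughly like this (a sketch; the master URL, class name, jar path, and resource values are placeholders rather than the exact command):

```
spark-submit \
  --master spark://<master-host>:7077 \
  --class com.example.RulesJob \
  --executor-memory 1G \
  --total-executor-cores 16 \
  /path/to/rules-job.jar /data/input /data/output
```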
Here I want to understand why adding a worker node does not improve performance. Is that not the correct expectation?
EDIT
After looking at other similar questions on Stack Overflow, I tried running the application with spark.dynamicAllocation.enabled=true, but it degrades the performance even further.
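The flag was added to the same submit command roughly as follows (again a sketch; only the added --conf option is the relevant part, the rest of the command is the same placeholder as above):

```
spark-submit \
  --master spark://<master-host>:7077 \
  --conf spark.dynamicAllocation.enabled=true \
  --class com.example.RulesJob \
  /path/to/rules-job.jar /data/input /data/output
```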