I'm using r4.4xlarge instances, which have 16 vCPUs and 8 physical cores each (see the AWS pages below):
https://aws.amazon.com/ec2/pricing/on-demand/
https://aws.amazon.com/ec2/virtualcores/
3 questions:
1) How should I calculate spark.executor.cores for my Spark application in a standalone cluster? This calculation has always been confusing for me. (A rough sketch of my current settings is below, after the questions.)
2) Am I right that the virtual-core count is per instance, not per vCPU?
3) Say I have 3 worker nodes and one master node, all with the above configuration, and I need to submit two Spark applications at the same time. Will both run concurrently? Or will one job grab all the resources (even if it doesn't need them) while the other waits in the queue? If enough resources are available, will both applications run at the same time?
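For reference, this is roughly how I've been setting the executor resources so far. The numbers are my own guesses based on 16 vCPUs and 122 GiB per node, not something I'm confident in, and the master host is a placeholder:

```python
from pyspark.sql import SparkSession

# My current guess: 16 vCPUs per r4.4xlarge worker, leave ~1 for the OS/daemons,
# and split the rest into executors of 5 cores each (~3 executors per node).
# I'm not sure whether I should count the 16 vCPUs or the 8 physical cores here.
spark = (
    SparkSession.builder
    .appName("my-app")
    .master("spark://<master-host>:7077")     # standalone cluster master (placeholder)
    .config("spark.executor.cores", "5")      # cores per executor (guess)
    .config("spark.executor.memory", "34g")   # ~122 GiB / 3 executors, minus overhead (guess)
    .config("spark.cores.max", "24")          # half of 3 workers x 16 vCPUs, so a second app can also get cores (guess)
    .getOrCreate()
)
```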
Note: I'm using the Spark REST API to submit the above 2 applications as 2 separate spark-submits.
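This is roughly how I submit the two applications through the standalone master's REST endpoint (port 6066). The host, bucket, jar paths, and class names are placeholders, and the payload shape is what I pieced together from examples, so it may not be exact:

```python
import requests

def submit(app_name, jar_path, main_class):
    # One submission to the standalone master's REST server.
    payload = {
        "action": "CreateSubmissionRequest",
        "appResource": jar_path,
        "mainClass": main_class,
        "appArgs": [],
        "clientSparkVersion": "2.3.0",  # whatever version the cluster runs
        "environmentVariables": {"SPARK_ENV_LOADED": "1"},
        "sparkProperties": {
            "spark.app.name": app_name,
            "spark.master": "spark://<master-host>:7077",  # not sure if this should be 7077 or the REST port 6066
            "spark.submit.deployMode": "cluster",
            "spark.jars": jar_path,
            "spark.executor.cores": "5",    # same guesses as in the config sketch above
            "spark.executor.memory": "34g",
            "spark.cores.max": "24",
        },
    }
    return requests.post(
        "http://<master-host>:6066/v1/submissions/create", json=payload
    ).json()

# Both submissions go out back to back; question 3 is about what the
# standalone scheduler actually does with the second one.
print(submit("app-one", "s3://<bucket>/app-one.jar", "com.example.AppOne"))
print(submit("app-two", "s3://<bucket>/app-two.jar", "com.example.AppTwo"))
```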