In Spark on YARN you can enable dynamic allocation via the following settings (see the spark-submit sketch after this list):
spark.dynamicAllocation.enabled
spark.dynamicAllocation.minExecutors
spark.dynamicAllocation.maxExecutors
spark.dynamicAllocation.initialExecutors
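For example, a minimal spark-submit sketch. The main class, jar name, and executor counts here are hypothetical placeholders, not recommendations; tune the counts for your cluster:

```
# Sketch only: class name, jar, and executor counts are hypothetical placeholders.
spark-submit \
  --master yarn \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=20 \
  --conf spark.dynamicAllocation.initialExecutors=2 \
  --class com.example.YourApp \
  your-app.jar
```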
If you refer to the Spark job scheduling documentation:
Dynamic Resource Allocation
Spark provides a mechanism to dynamically adjust the resources your application occupies based on the workload. This means that your application may give resources back to the cluster if they are no longer used and request them again later when there is demand. This feature is particularly useful if multiple applications share resources in your Spark cluster.
This feature is disabled by default and available on all coarse-grained cluster managers, i.e. standalone mode, YARN mode, and Mesos coarse-grained mode.
Link to documentation: https://spark.apache.org/docs/2.2.0/job-scheduling.html#configuration-and-setup
There are two requirements for using this feature. First, your application must set spark.dynamicAllocation.enabled to true. Second, you must set up an external shuffle service on each worker node in the same cluster and set spark.shuffle.service.enabled to true in your application.
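Putting both requirements together, a hedged sketch of the submit command might look like the following. Note that on YARN the external shuffle service runs inside the NodeManagers, so it has to be registered as an auxiliary service in yarn-site.xml first (as the comments describe):

```
# Sketch only: assumes the YARN NodeManagers already run Spark's external
# shuffle service, i.e. yarn.nodemanager.aux-services includes spark_shuffle
# and yarn.nodemanager.aux-services.spark_shuffle.class is set to
# org.apache.spark.network.yarn.YarnShuffleService in yarn-site.xml.
# The class name and jar below are hypothetical placeholders.
spark-submit \
  --master yarn \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --class com.example.YourApp \
  your-app.jar
```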
I'm not sure how easily this is handled on Azure. Here is a question about dynamic resource allocation on EMR; it's not directly related, but it may help in your search:
Spark + EMR using Amazon's "maximizeResourceAllocation" setting does not use all cores/vcores