Is there a way to determine the total number of task slots a job will require, either from the execution plan or in some other way, without having to actually start the job first?
According to this doc: https://ci.apache.org/projects/flink/flink-docs-stable/concepts/runtime.html
"A Flink cluster needs exactly as many task slots as the highest parallelism used in the job. No need to calculate how many tasks (with varying parallelism) a program contains in total."
If I get the execution plan from StreamExecutionEnvironment (after setting up the job, but without actually executing it) and take the maximum parallelism across the nodes in the execution plan JSON, would that be sufficient to determine the number of task slots required to run the job?
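For concreteness, here is a minimal sketch of what I have in mind. It assumes the JSON returned by `getExecutionPlan()` has a top-level `nodes` array whose entries carry a `parallelism` field, and uses Jackson for parsing; the helper name `maxParallelismInPlan` is just for illustration:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SlotEstimator {

    // Sketch: read the plan JSON (this does NOT execute the job) and
    // return the highest per-node parallelism, which -- per the docs
    // quoted above -- should equal the number of slots the job needs,
    // assuming all operators share the default slot sharing group.
    public static int maxParallelismInPlan(StreamExecutionEnvironment env) throws Exception {
        String planJson = env.getExecutionPlan();
        JsonNode plan = new ObjectMapper().readTree(planJson);

        int max = 0;
        // Assumption: the plan JSON exposes "nodes", each with "parallelism".
        for (JsonNode node : plan.get("nodes")) {
            max = Math.max(max, node.get("parallelism").asInt());
        }
        return max;
    }
}
```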
Are there any situations where this ceases to be the case? Or any caveats to keep in mind?