I'm looking for a way to easily execute parameterized runs of Jupyter Notebooks, and I've found the Papermill project (https://github.com/nteract/papermill/).
This tool seems to match my requirements, but I can't find any reference to PySpark kernel support.
Are PySpark kernels supported by Papermill executions?
If so, is there any configuration needed to connect them to the Spark cluster used by Jupyter?
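For context, this is the kind of invocation I have in mind. The kernel name below is just a placeholder for whatever PySpark kernel would be registered with Jupyter; I'm not sure if this is the right way to target it:

```shell
# Hypothetical invocation: assumes a PySpark kernel named "pysparkkernel"
# is already registered with Jupyter (e.g. via sparkmagic or similar).
# -p passes a notebook parameter, -k selects the kernel to execute with.
papermill input.ipynb output.ipynb \
    -p alpha 0.5 \
    -k pysparkkernel
```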
Thanks in advance for the support, Mattia