I just noticed a subtle issue when using addPyFile in PySpark together with autoreload in Jupyter Notebooks.
The situation is that I have code in modules that I use in PySpark UDFs, so (as far as I know) these modules need to be shipped as dependencies via addPyFile so that they are available on the executors. I zip them and call addPyFile after setting up the PySpark session.
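For reference, this is roughly how my setup looks (a minimal sketch; the local[*] master, the zip name, and the package layout are just illustrative):

# In the notebook, autoreload is enabled first:
#   %load_ext autoreload
#   %autoreload 2

import shutil
from pyspark.sql import SparkSession

# Zip the package that the UDFs depend on (produces zipped_modules.zip).
shutil.make_archive("zipped_modules", "zip", root_dir=".", base_dir="mypath")

spark = SparkSession.builder.master("local[*]").getOrCreate()

# Ship the zipped package so the executors can import it.
spark.sparkContext.addPyFile("zipped_modules.zip")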
However, after this, changes in those modules are no longer autoreloaded when working in IPython/Jupyter Notebook. More specifically, running
import importlib
import mypath.mymodule as mm
importlib.reload(mm)
would normally show the path to the module, e.g. <module 'mypath.mymodule' from 'C:\\project\\mypath\\mymodule.py'>
However, after initializing Spark and submitting the zipped files via addPyFile, the above code would output something like: <module 'mypath.mymodule' from 'C:\\Users\\username\\AppData\\Local\\Temp\\spark-....\\userFiles-...\\zipped_modules.zip\\mypath\\mymodule.py'>
That is, Spark appears to switch to the code from the zipped (and cached) modules, even in the driver program, so changes to the original modules no longer get autoreloaded in the driver.
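As far as I can tell, this can be seen on the driver by inspecting sys.path after the addPyFile call (a rough sketch; the exact temp paths are machine-specific):

import sys
from pyspark import SparkFiles

# The downloaded zip seems to end up near the front of sys.path on the driver,
# so subsequent (re)imports resolve to the zipped copy instead of my source tree.
print(SparkFiles.getRootDirectory())              # the spark-.../userFiles-... temp dir
print([p for p in sys.path if "userFiles" in p])  # includes the zipped_modules.zip path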
Is there a way to keep autoreloading changes to these modules in the driver program in this scenario?