You can use the SparkContext that is available in the pyspark shell via the 'spark' SparkSession variable, as follows:
spark.sparkContext.addPyFile('Path to your file')
As per the Spark docs, a .py or .zip dependency containing Python code is supported here:
| addPyFile(self, path)
| Add a .py or .zip dependency for all tasks to be executed on this
| SparkContext in the future. The C{path} passed can be either a local
| file, a file in HDFS (or other Hadoop-supported filesystems), or an
| HTTP, HTTPS or FTP URI.
|
| .. note:: A path can be added only once. Subsequent additions of the same path are ignored.
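Per the docstring, the path does not have to be local. A minimal sketch of the non-local variants (the HDFS and HTTPS locations below are placeholders, not real paths):

spark.sparkContext.addPyFile('hdfs:///user/you/deps/pyspark_test.zip')    # placeholder HDFS path
spark.sparkContext.addPyFile('https://example.com/deps/pyspark_test.zip')  # placeholder HTTPS URI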
Below is a successful import and function call after adding the zip:
>>> sc.addPyFile('D:\pyspark_test.zip')
>>> import test
>>> test
<module 'test' from 'C:\\Users\\AppData\\Local\\Temp\\spark-f4559ba6-0661-4cea-a841-55d7550d809d\\userFiles-062f5965-e5df-4d26-b2cd-daf7613df56a\\pyspark_test.zip\\test.py'>
>>> test.print_data()
hello
>>>
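For reference, the session above assumes a module along these lines (reconstructed from the output shown, not the original file):

# test.py inside pyspark_test.zip (assumed contents)
def print_data():
    print('hello')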
Make sure the zip file structure is as follows. When creating the zip, select the individual files inside the module and zip those, rather than selecting the module folder itself and zipping that (a sketch for building such a zip follows the tree below).
└───pyspark_test
        test.py
        __init__.py
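A minimal sketch of building the zip with the files at the archive root (assuming the source folder is named pyspark_test, as in the tree above):

import zipfile

# Write each file at the root of the archive (no enclosing folder),
# so the module resolves as pyspark_test.zip/test.py as in the session above.
with zipfile.ZipFile('pyspark_test.zip', 'w') as zf:
    zf.write('pyspark_test/test.py', arcname='test.py')
    zf.write('pyspark_test/__init__.py', arcname='__init__.py')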