PEP 8 states:
Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants.
However, if the class/method/function I am importing is only used by a child process, surely it is more efficient to do the import only when it is needed? My code is basically:
import multiprocessing
import os

p = multiprocessing.Process(target=main, args=(dump_file,))
p.start()
p.join()
print u"Process ended with exitcode: {}".format(p.exitcode)
if os.path.getsize(dump_file) > 0:
    blc = BugLogClient(listener='http://21.18.25.06:8888/bugLog/listeners/bugLogListenerREST.cfm',
                       appName='main')
    blc.notifyCrash(dump_file)
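For context, the dump file is only non-empty when the child crashed: main() enables faulthandler against dump_file, so a fatal error leaves a traceback there for the parent to find. A simplified sketch of that setup (not the exact code):

import faulthandler

def main(dump_file):
    # The file object has to stay alive for the whole process lifetime:
    # faulthandler keeps the descriptor and writes a traceback to it on
    # a hard crash (segfault, fatal signal, fatal interpreter error).
    dump = open(dump_file, 'w')
    faulthandler.enable(file=dump)
    # ... start the actual application here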
main() is the main application. This function needs a lot of imports to run, and those take up some RAM (roughly 35 MB). Since the application runs in another process, following PEP 8 meant the imports were done twice (once by the parent process and again by the child). It should also be noted that main() is only called once, as the parent process just waits to see whether the application crashed and left an exit code (thanks to faulthandler). So I moved the imports inside the main function, like this:
def main(dump_file):
    import shutil
    import locale
    import faulthandler
    from PySide.QtCore import Qt
    from PySide.QtGui import QApplication, QIcon
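That the deferred imports really happen only in the process running main() can be verified by checking sys.modules on both sides; a minimal sketch (shutil stands in for the heavy modules, and the dump-file path is a placeholder):

import multiprocessing
import sys

def main(dump_file):
    import shutil  # deferred import: executed only in the child
    print('child sees shutil: {}'.format('shutil' in sys.modules))

if __name__ == '__main__':
    p = multiprocessing.Process(target=main, args=('crash.dmp',))
    p.start()
    p.join()
    # The parent never ran the deferred import, so this should print
    # False (nothing else here imports shutil).
    print('parent sees shutil: {}'.format('shutil' in sys.modules))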
instead of:
import shutil
import locale
import faulthandler
from PySide.QtCore import Qt
from PySide.QtGui import QApplication, QIcon
def main(dump_file):
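Whether the top-level version really pays the import cost twice depends on how the child is started: with fork (the Unix default, and the only option on Python 2/Unix) the child inherits the parent's already-imported modules, while with spawn (always used on Windows, selectable on Python 3.4+) the child starts a fresh interpreter and re-imports the main module, so module-level imports run again. A small sketch illustrating the difference:

import multiprocessing
import os

# This line runs at import time, so it shows which processes import the module.
print('module imported in pid {}'.format(os.getpid()))

def worker():
    pass

if __name__ == '__main__':
    multiprocessing.set_start_method('spawn')  # Python 3.4+ API
    p = multiprocessing.Process(target=worker)
    p.start()
    p.join()
    # Under 'spawn' the import-time line prints twice (parent and child);
    # under 'fork' it prints once, because the child inherits the module.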
Is there a 'standard' way to handle imports when using multiprocessing?
PS: I've seen this sister question.