I am trying to make my code modular because it has grown too long, but the problem is I don't know whether I'm doing it safely. I split my code into separate files, so one Python file runs the others. Sometimes I have to call one file that runs another file, which in turn runs yet another file, so the commands are chained.
The issue is that some of the files process sensitive information such as passwords, so I want to be sure I'm handling it safely. Ideally, after a file has finished executing, it should close itself, delete all of its variables from memory, and free that space, just as it normally would if I ran that single file on its own. The problem is that I don't know whether this still happens when I call multiple files nested inside one another. Obviously only the file that has finished executing should be cleared, not the one that is still active, but I don't know if that is actually the case.
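To give an idea, one of the sensitive files looks roughly like this (a simplified sketch; reading the password with getpass is just an example, the real file does more):

# file3.py -- simplified sketch of a file that handles a password
import getpass

def main():
    password = getpass.getpass("Enter password: ")  # the sensitive value only exists inside this process
    # ... do the actual work with the password here ...

if __name__ == "__main__":
    main()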
I have been calling my modules like this:
os.system('python3 ' + filename)
Each file then contains the same kind of os.system call to run the next file, forming a nested or chained call system.
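In practice the hand-off inside a file looks roughly like this (simplified; filename is just whatever the next script in the chain happens to be):

import os

filename = 'file2.py'             # the next script in the chain
os.system('python3 ' + filename)  # runs it via the shell and blocks until it finishes, then this file continues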
For example, if I call the first file from the shell:
python3 file1.py
and then file1 calls:
os.system('python3 file2.py')
and then file2 calls:
os.system('python3 file3.py')
I would want file3 closed entirely and cleaned out of memory after it runs, whereas file2 and file1 might still be active. I don't want file3 to remain in memory after it has finished executing. So if file3 works with passwords, they should obviously be cleared from memory once it is done.
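Put together, the chain is roughly this (a sketch with made-up print statements and a placeholder password, just to show which process is waiting where):

# file1.py
import os
print("file1: start")
os.system('python3 file2.py')   # file1 sits here, waiting, while file2 (and file3) run
print("file1: continues after file2 has exited")

# file2.py
import os
print("file2: start")
os.system('python3 file3.py')   # file2 sits here, waiting, while file3 runs
print("file2: continues after file3 has exited")

# file3.py
password = "placeholder-secret"  # stands in for the real sensitive value
print("file3: doing the password work")
# my assumption/hope: when file3.py ends, its process terminates and its memory is released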
How can I do this?
I have read about multiple options:
from subprocess import call
call(["python3", "file2.py"])

import subprocess
subprocess.call("file2.py", shell=True)

execfile('file2.py')

import subprocess
subprocess.Popen("file2.py", shell=True)
Which one is safer?
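In case it helps, this is how I imagine the list form would look inside file1.py (an untested sketch):

from subprocess import call

return_code = call(["python3", "file2.py"])   # waits for file2.py to finish and returns its exit code
print("file2.py exited with code", return_code)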