Since I'm not quite following, here's at least a start of an MCVE. Project dir structure:
```
project/
    __init__.py
    run.py
    package/
        __init__.py
        a.py
        b.py
        constants.py
```
Starting from the `package` directory (the inner one) I have:

`__init__.py`

```python
from .a import ModelA
from .b import ModelB
```
`a.py`

```python
from . import constants

class ModelA:
    def __init__(self, a, b):
        print("Instantiating Model A with {0} {1}".format(a, b))
        print(" Pi:{0} hbar{1}".format(constants.pi, constants.hbar))
```
`b.py`

```python
from . import constants

class ModelB:
    def __init__(self, a, b):
        print("Instantiating Model B with {0} {1}".format(a, b))
        print(" Pi:{0} hbar{1}".format(constants.hbar, constants.pi))
```
`constants.py`

```python
hbar = 1
pi = 3.14
```
Note that the init has content purely for the convenience of importing `project` as a package and having the `ModelA` and `ModelB` names available under it immediately. I could just as easily have left an empty `__init__.py` file and then done `from project.package.a import ModelA`, much like how it is done in the `project` dir. The top dir (`project`) has an empty `__init__.py` file and a:
`run.py`

```python
#!/usr/bin/env python
import argparse

import package

parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('--param1', dest='param1')
parser.add_argument('--param2', dest='param2')
args = parser.parse_args()

package.ModelA(args.param1, args.param2)
package.ModelB(args.param2, args.param1)
```
Note that the shebang line might not be necessary; it is situation-dependent, e.g. when running on a cluster where environment management can play a role.
Running this from the terminal should get you:

```
$:> python3 project/run.py --param1 10 --param2 100
Instantiating Model A with 10 100
 Pi:3.14 hbar1
Instantiating Model B with 100 10
 Pi:1 hbar3.14
```
Now take this and improve what you have, or reconstruct what you're trying to do in simplified terms like these (hopefully breaking the example), and then post back which part isn't working and why.
**Edit:**
Let me preface the solution with a statement that doing things like this is setting yourself up for failure. What it looks like to me is that you have a `run.py` file in which you want to parse the arguments sent in over the terminal and use them to set the global state of your program execution. You should almost never do that (the only exception I know of is that sometimes, and only sometimes, there is use in setting global state for an engine or a session when connecting to a database, but even then it's usually not the only or the best solution).
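To make the "parse once, then pass values down" point concrete, here is a minimal sketch (the `build_model` and `main` names are illustrative, not part of your project): the submodule-level code receives everything it needs as ordinary arguments instead of reading global state.

```python
import argparse

def build_model(param1, param2):
    # Stand-in for submodule functionality: it takes plain arguments,
    # so it never needs to import anything from run.py.
    return {"param1": param1, "param2": param2}

def main(argv=None):
    # Parse the terminal input in exactly one place...
    parser = argparse.ArgumentParser()
    parser.add_argument('--param1', dest='param1')
    parser.add_argument('--param2', dest='param2')
    args = parser.parse_args(argv)
    # ...and pass the parsed values down explicitly.
    return build_model(args.param1, args.param2)

if __name__ == "__main__":
    print(main())
```

Accepting an optional `argv` also makes the entry point testable without touching `sys.argv`.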
This is the very reason why you want modules and packages. There should be no reason why you would not be able to parse your input in `run.py` and **call** the functionality in any of your submodules. How many parameters the functions in `a` and `b` take, or whether they use all or none of the sent parameters, literally makes no difference. You could edit the above example so that classes A and B need only 1 or 3 or 10 parameters, or A 5 and B none, and it would still work.
`a.py`

```python
from . import constants
from project.run import args

print("from A", args)

class ModelA:
    def __init__(self, a, b):
        print("Instantiating Model A with {0} {1}".format(a, b))
        print(" Pi:{0} hbar{1}".format(constants.pi, constants.hbar))
```
`b.py`

```python
from . import constants
from project.run import args

print("from B", args)

class ModelB:
    def __init__(self, a, b):
        print("Instantiating Model B with {0} {1}".format(a, b))
        print(" Pi:{0} hbar{1}".format(constants.hbar, constants.pi))
```
`run.py`

```python
#!/usr/bin/env python
import argparse

from . import package

parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('--param1', dest='param1')
parser.add_argument('--param2', dest='param2')
args = parser.parse_args()

if __name__ == "__main__":
    package.ModelA(args.param1, args.param2)
    package.ModelB(args.param2, args.param1)
```
And then invoke it as:

```
$:> python3 -m project.run --param1 10 --param2 100
from A Namespace(param1='10', param2='100')
from B Namespace(param1='10', param2='100')
Instantiating Model A with 10 100
 Pi:3.14 hbar1
Instantiating Model B with 100 10
 Pi:1 hbar3.14
```
Notice that not much has changed; almost nothing, really. We imported `args` from `run.py` by using absolute import paths instead of relative ones, and then we moved the execution code into `if __name__ == "__main__":` so that it does not get called on every import, only when the script is run as the "main" program. The only bigger and important difference is the invocation of the script. The terminal command `python3 -m project.run` will import the module and then run it as a script, whereas the previously used `python3 project/run.py` just tries to run `run.py` as a script. When the module is first imported, its `__package__` value gets set; when `__package__` is set, it enables explicit relative imports, so the `from project.run import ...` statements work because now Python knows where it has to go to look for those values. Whereas when `run.py` is run just like a script, then when the `import package` statement gets invoked in the old `run.py`, Python goes into the `package/` directory but does not know it might need to go back up one level (to `run.py`) to search for values defined in there and import them back down into the `package/` dir.
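You can observe the `__package__` difference directly. The sketch below (a throwaway experiment, not part of the project above) builds a tiny one-module package in a temp directory whose only content is `print(repr(__package__))`, then runs it both ways: as a plain script and via `-m`.

```python
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # Build a minimal package: pkg/__init__.py and pkg/mod.py
    pkg = os.path.join(tmp, "pkg")
    os.mkdir(pkg)
    open(os.path.join(pkg, "__init__.py"), "w").close()
    with open(os.path.join(pkg, "mod.py"), "w") as f:
        f.write("print(repr(__package__))\n")

    # Run as a plain script: Python never imports pkg, so __package__
    # is not set to the parent package and relative imports would fail.
    as_script = subprocess.run(
        [sys.executable, os.path.join(pkg, "mod.py")],
        capture_output=True, text=True).stdout.strip()

    # Run with -m from the parent dir: Python imports pkg first,
    # so __package__ is set and relative imports work.
    as_module = subprocess.run(
        [sys.executable, "-m", "pkg.mod"],
        capture_output=True, text=True, cwd=tmp).stdout.strip()

print("as script:", as_script)
print("with -m:  ", as_module)
```

The `-m` run reports `'pkg'`, while the plain-script run reports a value that is not a package name, which is exactly why the relative-import machinery has nothing to work with in that case.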