I have a program with several submodules. I want the submodules to be usable independently, so that I can use some of them in other programs as well. However, the submodules are interdependent: they need parts of each other to run. What is the least problematic way to deal with this?
Right now, I have structured my program like this:
```
myapp/
|-- __init__.py
|-- app.py
|-- special_classes
|   |-- __init__.py
|   `-- tools
|       `-- __init__.py
|-- special_functions
|   |-- __init__.py
|   `-- tools
|       `-- __init__.py
`-- tools
    |-- __init__.py
    |-- classes.py
    `-- functions.py
```
Here each submodule is a git submodule of its parent.
The advantage of this is that I can manage and develop each submodule independently, and adding one of these submodules to a new project is as simple as `git clone`-ing and `git submodule add`-ing it. Because I work in a managed, shared computing environment, where user environment management, software versioning, and installation are contentious issues, this also makes the program easy to run.
The disadvantage is that, in this example, I now have 3 copies of the `tools` submodule, which are independent of each other and have to be manually updated every time any one of them changes. Doing any sort of development on the submodules becomes very cumbersome. It has also tripled the number of unit tests I run, since the tests get run in each submodule and there are 3 copies of the `tools` module.
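To make the duplication concrete, this is roughly what it looks like from the import side (the module paths are taken from the tree above; nothing beyond that is assumed):

```python
# The same "tools" code is importable under three different paths, each
# backed by its own independent git checkout:
import myapp.tools as shared_tools
import myapp.special_classes.tools as classes_tools
import myapp.special_functions.tools as functions_tools

# Python treats these as three separate module objects, so a bug fix or a
# new unit test in one copy does not show up in the other two until every
# submodule pointer is manually bumped.
print(shared_tools is classes_tools)  # False
print(shared_tools.__file__)          # .../myapp/tools/__init__.py
print(classes_tools.__file__)         # .../myapp/special_classes/tools/__init__.py
```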
I have seen various importing methods, such as those mentioned here, but that does not seem like an ideal solution for this.
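(The importing methods I mean are, as far as I can tell, the usual `sys.path` manipulation tricks, roughly like the sketch below; the relative path is just an assumption about where the shared checkout would live.)

```python
# Path-manipulation workaround sketched against an assumed layout: a module
# inside special_classes/ reaching over to the sibling top-level tools/ copy.
import os
import sys

sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "tools"))

import classes  # noqa: E402  -- resolves to tools/classes.py at runtime
```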
I have read about how to create a formal Python package here, but this seems like a large undertaking and will make it much more difficult for my end users to actually install and run the program.
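As I understand it, the packaging route would mean giving the shared `tools` repository something like a minimal `setup.py` (the distribution name and version below are placeholders, not something I have set up):

```python
# Minimal setup.py sketch for installing the shared tools code on its own;
# nothing here exists yet, it is only what the packaging route would imply.
from setuptools import setup, find_packages

setup(
    name="myapp-tools",        # placeholder distribution name
    version="0.1.0",           # placeholder version
    packages=find_packages(),  # would pick up the tools package
)
```

Every user (or the shared environment) would then have to install it, e.g. with pip, which is exactly the kind of administrative step that is difficult where I work.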
Another relevant question was asked here.