Given that I have the following directory structure, with . being the current working directory:
    .
    \---foo
        \---bar
                __init__.py
                baz.py
When I run python -c "import foo.bar.baz", I get:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
    ImportError: No module named foo.bar.baz
If I run echo "" > foo/__init__.py, the above command works.
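For completeness, here's a minimal sketch of the rule as I currently understand it (assuming Python 2, where __init__.py is mandatory): every directory along the dotted import path needs its own __init__.py before the import succeeds.

    import os

    # mark each directory on the dotted path as a package with an empty
    # __init__.py; the file's contents don't matter, only its presence
    for d in ("foo", os.path.join("foo", "bar")):
        if not os.path.isdir(d):
            os.makedirs(d)
        open(os.path.join(d, "__init__.py"), "w").close()

    open(os.path.join("foo", "bar", "baz.py"), "w").close()

    import foo.bar.baz  # succeeds: foo and foo.bar are both packages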
Am I doing something wrong, or do I misunderstand the point of __init__.py? I thought it was to stop modules existing where they shouldn't, e.g. a directory named string, but if you replace foo with string in my example, I'm seemingly forced to create the module that should never be used, just so I can reference a file deeper in the hierarchy.
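To make that shadowing concern concrete, here's a hypothetical sketch (assuming Python 2, and that string hasn't already been imported at interpreter startup): once a local string/__init__.py exists, import string resolves to it instead of the standard library, because the working directory is searched first.

    import os

    # hypothetical local package that shadows the stdlib "string" module
    os.mkdir("string")
    open(os.path.join("string", "__init__.py"), "w").close()

    import string
    # points at ./string/__init__.py, not the standard library copy
    print(string.__file__)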
Update
I'm working with a build system that's generating the __init__.py's for me and is enforcing the directory structure; while I could mess with the hierarchy, I'd prefer to just add the __init__.py myself. To change the question slightly: why do I need a Python package at every level instead of just at the top? Is it just a rule that you can only import modules from the Python path or from a chain of packages off of the Python path?
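If that rule is right, one workaround sketch (assuming I'm free to touch sys.path at runtime, which the build system may not allow) is to put the inner directory on the path and import baz as a top-level module, bypassing the package chain entirely:

    import os
    import sys

    # treat foo/bar itself as a search root, so baz.py is found as a
    # top-level module rather than as the tail of a package chain
    sys.path.insert(0, os.path.join("foo", "bar"))

    import baz  # found directly, with no __init__.py needed above it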