Let's start with why your first one works.
Assuming you have a file structure like this:
c.py
m/
    __init__.py
    a.py
    b.py
Let's say your current directory is the m folder and you run:
python b.py
If, within your b.py, you place something like:
import a
import sys
print(sys.path)
it will work just fine. But let's take a look at the output from print(sys.path).
On Windows, this will look something like ['C:\\path\\to\\myprogram\\m', '<path to python installation>', etc.]
The important thing is that the m folder is on your sys.path, and that is how import a gets resolved.
However, if we go up one level and run:
python c.py
you should immediately notice that m is no longer on your sys.path and has been replaced by 'C:\\path\\to\\myprogram'.
That is why it fails. When you run a script directly, Python automatically puts the directory containing that script (which in these examples is also your current working directory) on sys.path. Running c.py from one level up means m is no longer on the path, so Python no longer knows where to find a.
import a is an example of an absolute import. You can control where Python looks for absolute imports by modifying sys.path to include the directory of the file you want to import:
sys.path.append('C:\\path\\to\\myprogram\\m')
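As a minimal sketch of that workaround (assuming c.py is the script that imports b), c.py could compute the path to m relative to itself instead of hard-coding it:
# c.py -- sketch of the sys.path workaround
import os
import sys
# Put the m folder itself on sys.path so that the absolute
# "import a" inside b.py can be resolved when c.py is run.
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'm'))
from m import b  # b.py's "import a" now succeeds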
But there's a better way to do it. If m is a package or sub-package (i.e., it includes an __init__.py), you can use a relative import:
from . import a
However, there is a small caveat to this. You can only use relative imports when the file using them is being run as a module within a package. So running b.py directly as a top-level script with
python b.py
will produce
ImportError: attempted relative import with no known parent package
Luckily, Python has the ability to run a script as a module built in. If you cd up one level, so that the m package sits directly under your current directory, you can run
python -m m.b
and have it run just fine.
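For reference, a minimal b.py using the relative import (the print line is just an illustration, not something from your code) could look like this:
# m/b.py -- sketch using the relative import
from . import a
print("b imported", a.__name__)  # prints "b imported m.a" under python -m m.b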
Additionally, since the import in c.py loads b.py as a module of the m package, the relative import will work in that case as well, so
python c.py
now works too.
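The exact import in your c.py isn't shown here, but anything that pulls b in through the package will do, for example:
# c.py -- sketch, assuming c.py imports b through the m package
from m import b  # b is loaded as m.b, so "from . import a" inside it resolves
Because b is imported as m.b, its parent package is known, which is exactly what the relative import needs.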