The Python documentation for the import statement (link) contains the following:
The public names defined by a module are determined by checking the module’s namespace for a variable named __all__; if defined, it must be a sequence of strings which are names defined or imported by that module.
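To be concrete, this first behaviour is the one I am used to: a plain module listing its own names in __all__ (mymod.py here is a hypothetical file, just for illustration):

# mymod.py
def public():
    return "public"

def _helper():
    return "helper"

# Only "public" is exported by a wildcard import; _helper is not.
__all__ = ["public"]

so that from mymod import * binds public but not _helper.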
The Python documentation for modules (link) contains what is seemingly a contradictory statement:
if a package’s __init__.py code defines a list named __all__, it is taken to be the list of module names that should be imported when from package import * is encountered.
It then gives an example where an __init__.py file imports nothing, and simply defines __all__ to be some of the names of modules in that package.
I have tested both ways of using __all__, and both seem to work; indeed, one can mix and match within the same __all__ value.
For example, consider the directory structure
foopkg/
    __init__.py
    foo.py
where __init__.py contains
# Note no imports
def bar():
print("BAR")
__all__ = ["bar", "foo"]
NOTE: I know one shouldn't define functions in an __init__.py file; I'm just doing it to illustrate that the same __all__ can export both names that do exist in the current namespace and those which do not.
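(The contents of foo.py are irrelevant to the experiment; assume something minimal, for example:

# foo.py
def baz():
    print("BAZ")

)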
The following code runs, seemingly auto-importing the foo module:
>>> from foopkg import *
>>> dir()
[..., 'bar', 'foo']
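Checking further in the same session, as far as I can tell foo really is bound to the submodule object, not just to some placeholder name:

>>> import types
>>> isinstance(foo, types.ModuleType)
True
>>> bar
<function bar at 0x...>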
Why does the __all__ attribute have this strange double behaviour?
The docs seem unclear about how __all__ is supposed to be used, mentioning only one of its two sides in each place I linked. I understand that the overall purpose is to explicitly set the names imported by a wildcard import, but I am confused by the additional, seemingly auto-importing behaviour. Is this just a magic shortcut that avoids having to write the import out as well?
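In other words, is listing "foo" in __all__ effectively shorthand for writing the import explicitly in __init__.py, like this (my guess at the equivalent, not taken from the docs)?

# __init__.py -- hypothetical explicit equivalent
from . import foo   # import the submodule ourselves

def bar():
    print("BAR")

__all__ = ["bar", "foo"]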