I guess that the desire to get the value from the type is to avoid code duplication and enable broader refactors. But let's think about it for a second...
Let's consider code duplication. We don't want to have to write the same literal value twice. But here's the thing: we're going to have to write something twice either way, the type or the literal, so why not the literal?
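To make that trade-off concrete, here is a minimal sketch of the two flavours of duplication (the helper names `f` and `g` are just placeholders; `TOpt1` mirrors the alias used further down):

```python
from typing import Literal, get_args

# Flavour 1: the literal value appears twice (in the type and in the check).
def f(x: Literal['opt1']) -> str:
    if x == 'opt1':
        return 'ok'
    raise ValueError(x)

# Flavour 2: the alias name appears twice instead.
TOpt1 = Literal['opt1']

def g(x: TOpt1) -> str:
    if x in get_args(TOpt1):
        return 'ok'
    raise ValueError(x)
```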
Let's consider enabling refactors. In this case we're worried that if we change the literal value of the type, code using the existing value will no longer work; it would be nice if we could change them all at once. Notice that the problem solved by the type-checker is adjacent to this one: when you change that value, it will warn you everywhere that the old value is no longer valid.
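For instance (a small sketch; the function and the replacement literal are made up, and the mypy message is paraphrased from memory):

```python
from typing import Literal

# Suppose the accepted value is renamed from 'opt1' to 'option1'...
def g(x: Literal['option1']) -> None:
    ...

# ...then every call site still passing the old value is reported by mypy,
# roughly: error: Argument 1 to "g" has incompatible type "Literal['opt1']";
#          expected "Literal['option1']"
g('opt1')
```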
In this case you can opt to use an `Enum` to put the literal value inside the `Literal` type:
```python
from typing import Literal, overload
from enum import Enum

class E(Enum):
    opt1 = 'opt1'
    opt2 = 'opt2'

@overload
def f(x: Literal[E.opt1]) -> str:
    ...
@overload
def f(x: Literal[E.opt2]) -> int:
    ...
def f(x: E):
    if x == E.opt1:
        return 'got 0'
    elif x == E.opt2:
        return 123
    raise ValueError(x)

a = f(E.opt1)
b = f(E.opt2)
reveal_type(a)
reveal_type(b)

# > mypy .\tmp.py
# tmp.py:28: note: Revealed type is "builtins.str"
# tmp.py:29: note: Revealed type is "builtins.int"
# Success: no issues found in 1 source file
```
Now when I want to change the "value" of `E.opt1`, no one else even cares, and when I want to change the "name" of `E.opt1` to `E.opt11`, a refactoring tool will do it everywhere for me.
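To make the first point concrete, a tiny sketch (the new value `'option-1'` is an arbitrary stand-in):

```python
from enum import Enum

class E(Enum):
    opt1 = 'option-1'  # value changed; call sites are untouched
    opt2 = 'opt2'

def g(x: E) -> None:
    print(x.value)

g(E.opt1)  # written exactly as before, now prints 'option-1'
```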
The "main problem" with this is that it requires users to use the `Enum`, when the whole point was to provide a convenient, value-based but type-safe interface, right? Consider the following `Enum`-less code:
```python
from typing import Literal, overload, get_args

TOpt1 = Literal['opt1']

@overload
def f(x: TOpt1) -> str:
    ...
@overload
def f(x: Literal['opt2']) -> int:
    ...
def f(x):
    if x in get_args(TOpt1):
        return 'got 0'
    elif x == 'opt2':
        return 123
    raise ValueError(x)

a = f('opt1')
b = f('opt2')
reveal_type(a)
reveal_type(b)

# > mypy .\tmp.py
# tmp.py:24: note: Revealed type is "builtins.str"
# tmp.py:25: note: Revealed type is "builtins.int"
```
I put both styles of checking the value of the argument in there: `def f(x: TOpt1)` with `if x in get_args(TOpt1)` vs. `def f(x: Literal['opt2'])` with `elif x == 'opt2'`. While the first style is "better" in some abstract sense, I would not write it that way unless `TOpt1` appears in multiple places (multiple overloads, or different functions). If it's just to be used in the one function for the one overload, then I absolutely would just use the values directly and not bother with `get_args` and defining type aliases, because in the actual definition of `f` I would much rather look at a value than wonder about a type argument.
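As a side note on why the check above uses `in`: `get_args` returns a tuple of the literal's arguments rather than the bare value, so comparing a string against it with `==` would never match. A quick sketch:

```python
from typing import Literal, get_args

TOpt1 = Literal['opt1']

print(get_args(TOpt1))            # ('opt1',) -- a one-element tuple
print('opt1' in get_args(TOpt1))  # True, hence `x in get_args(TOpt1)`
print(get_args(TOpt1)[0])         # 'opt1', if you want the single value itself
```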