How does polymorphism work under the hood in Python?
In Python, if I have some function, e.g.

    def f(x):
        return x + 2*x + 3*x + 4*x + 5*x + 6*x
then according to dis.dis(f), Python translates this into bytecode instructions that describe a cycle of:

- loading the next constant value
- loading x again
- multiplying them together
- adding the product onto the accumulation of preceding terms
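For reference, the cycle above can be inspected directly. (The exact opcode names vary by CPython version: separate BINARY_MULTIPLY/BINARY_ADD instructions before 3.11, a single parameterized BINARY_OP afterwards.)

```python
import dis

def f(x):
    return x + 2*x + 3*x + 4*x + 5*x + 6*x

# dis.dis(f) prints the listing; dis.Bytecode lets us collect
# the opcode names programmatically.
opnames = [ins.opname for ins in dis.Bytecode(f)]
print(opnames)  # shows the LOAD_CONST / LOAD_FAST / multiply / add cycle
```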
But if x is a NumPy array or a Python class, rather than a basic data type, then presumably the interpreter must do additional work (e.g. the binary multiply opcode must somehow cause other functions to be called, perhaps starting with some attribute lookups, which usually correspond to entirely different opcodes). This seems very different from ordinary assembly language, where a simple arithmetic operation would be atomic (and would not cause the CPU to execute extra instructions that aren't visible in the disassembly listing).
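To make the question concrete, here is a small experiment (the Tracing class is my own illustration): the bytecode for f is identical whether x is an int or an instance, yet with an instance the same multiply/add opcodes end up invoking user-defined special methods.

```python
def f(x):
    return x + 2*x  # same bytecode regardless of the type of x

class Tracing:
    """Records which special methods the interpreter invokes."""
    calls = []
    def __add__(self, other):
        Tracing.calls.append("__add__")
        return self
    def __mul__(self, other):
        Tracing.calls.append("__mul__")
        return self
    def __rmul__(self, other):
        # 2*x tries int.__mul__ first, which returns NotImplemented,
        # so the interpreter falls back to the reflected method.
        Tracing.calls.append("__rmul__")
        return self

f(Tracing())
print(Tracing.calls)  # ['__rmul__', '__add__']
```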
Is there documentation for how the Python interpreter operates, and what sequence of steps it actually performs when evaluating an expression involving polymorphism? (Ideally at a lower level of detail than what a step-through Python debugger would expose.)
Edit:
To support polymorphism, an arithmetic operation must involve not only arithmetic but also type checking, attribute lookup, conditional jumps, and function calls. (All of these have their own opcodes.) Is it correct that CPython implements this by making the arithmetic opcode itself perform many complex actions in a single step of the interpreter (apart from the instructions contained in any called function), rather than by stepping the interpreter through a sequence of separate opcodes to achieve the same result (e.g. LOAD_ATTR, CALL_FUNCTION, etc.)?
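For what it's worth, my current understanding of the dispatch hidden inside the multiply opcode, sketched in Python. This is a simplification of the C-level logic in CPython's Objects/abstract.c (binary_op1); among other things, the real code also gives priority to a right operand whose type is a subclass of the left operand's type.

```python
def binary_multiply(left, right):
    """Rough sketch of what one multiply opcode does internally:
    type-based method lookup, a call, a NotImplemented check,
    then a fallback to the reflected method."""
    lmul = getattr(type(left), "__mul__", None)
    if lmul is not None:
        result = lmul(left, right)
        if result is not NotImplemented:
            return result
    # Reflected method is skipped when both operands share a type,
    # since __mul__ was already tried.
    rmul = getattr(type(right), "__rmul__", None)
    if rmul is not None and type(left) is not type(right):
        result = rmul(right, left)
        if result is not NotImplemented:
            return result
    raise TypeError("unsupported operand type(s) for *: "
                    f"{type(left).__name__!r} and {type(right).__name__!r}")
```

So a single interpreter step performs lookups, calls, and branches that never appear as separate opcodes in the disassembly, e.g. binary_multiply(2, "ab") falls through int.__mul__ to str.__rmul__.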
Is there any documentation, such as a table covering all opcodes, describing all of the actions each opcode may cause?