For instance, let's say we have a Matrix class. We want to be able to use the '+' operator to add two matrices, or to add a matrix and a number (which is actually going to be defined as adding the number to each element of the matrix).
In C++, we could have done (skipping details with '...'):
class Matrix
{
    ...
    Matrix operator + (const Matrix& B) const
    {
        ... // process in some way
    }
    Matrix operator + (double x) const
    {
        ... // process in some other way
    }
};
But in Python, as I understand it, a later definition simply replaces an earlier one with the same name.
class Matrix:
    ...
    def __add__(self, B):
        ... # process in some way
    def __add__(self, x):
        ... # process in some other way
This does NOT work: only the last method definition is active.
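A quick way to convince yourself of this (a toy class of my own, not the Matrix example):

```python
# A second definition of the same name silently replaces the first;
# no error or warning is raised at class-creation time.
class Demo:
    def method(self):
        return "first"

    def method(self):
        return "second"

print(Demo().method())  # prints "second"; the first definition is gone
```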
So the first solution I could think of was to define only one method and parse arguments (according to their types, number, or keywords).
For example, with type-checking we could do something like (let's call it technique #1):
class Matrix:
    ...
    def __add__(self, X):
        if isinstance(X, Matrix):
            ... # process in some way
        elif isinstance(X, (int, float, complex)):
            ... # process in some other way
        else:
            raise TypeError("unsupported type for right operand")
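Filled in with concrete placeholder internals (a flat list of numbers standing in for real matrix storage, my own choice just to keep it short), technique #1 runs like this:

```python
# Runnable sketch of technique #1: one __add__ that dispatches on type.
class Matrix:
    def __init__(self, data):
        self.data = list(data)

    def __add__(self, X):
        if isinstance(X, Matrix):
            # matrix + matrix: element-wise sum
            return Matrix(a + b for a, b in zip(self.data, X.data))
        elif isinstance(X, (int, float, complex)):
            # matrix + scalar: add the scalar to every element
            return Matrix(a + X for a in self.data)
        else:
            raise TypeError("unsupported type for right operand")

print((Matrix([1, 2]) + Matrix([3, 4])).data)  # [4, 6]
print((Matrix([1, 2]) + 10).data)              # [11, 12]
```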
But I have often read that type-checking is not 'pythonic', so what else?
Moreover, in this case there are always two arguments; but more generally, what if we want to be able to process different numbers of arguments?
To clear things up, let's say we have a method named 'mymethod' and we want to be able to call, say:
mymethod(type1_arg1)
mymethod(type2_arg1)
mymethod(type3_arg1, type4_arg2)
Let's also assume that:
- each version processes the same object, so it must be one unique method,
- each processing is different, and the argument types matter.
---------------------------------------------------------- EDIT ----------------------------------------------------------
Thanks everyone for your interest in this topic.
A good alternative to type-checking was suggested below by @thegreatemu (let's call it technique #2). It uses the duck-typing paradigm and exception handling. But as far as I know it only works when the number of arguments is the same in all cases.
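My reading of technique #2, sketched on the same toy Matrix (the list-of-numbers storage is a placeholder of my own): try the matrix path first, and let the failure itself signal that the operand is a scalar.

```python
# Sketch of technique #2 (duck typing): don't check types up front;
# assume the operand is matrix-like and fall back on failure.
class Matrix:
    def __init__(self, data):
        self.data = list(data)

    def __add__(self, other):
        try:
            other_data = other.data  # assume 'other' is matrix-like
        except AttributeError:
            # no .data attribute: treat 'other' as a scalar
            return Matrix(a + other for a in self.data)
        # matrix + matrix: element-wise sum
        return Matrix(a + b for a, b in zip(self.data, other_data))
```

Note that the matrix-like attempt must come first: a scalar has no .data, so the scalar path is the natural fallback, not the other way around.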
From what I understood from answers here and on the linked topic, when we want to process different numbers of arguments, we can use keyword arguments. Here is an example (let's call it technique #3).
class Duration:
    """Durations in H:M:S format."""
    def __init__(self, hours, mins=None, secs=None):
        """Works with decimal hours or H:M:S format."""
        if mins is not None and secs is not None: # direct H:M:S
            ... # checks
            self.hours = hours
            self.mins = mins
            self.secs = secs
        else: # decimal
            ... # checks
            ... # convert decimal hours to H:M:S
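Filled in with concrete conversion logic (my own placeholder implementation; real code would validate ranges too), technique #3 runs like this:

```python
# Runnable sketch of technique #3: keyword arguments with None defaults
# tell the two call signatures apart.
class Duration:
    """Durations in H:M:S format."""
    def __init__(self, hours, mins=None, secs=None):
        """Works with decimal hours or H:M:S format."""
        if mins is not None and secs is not None:  # direct H:M:S
            self.hours = hours
            self.mins = mins
            self.secs = secs
        elif mins is None and secs is None:        # decimal hours
            whole = int(hours)
            rem = (hours - whole) * 60
            self.hours = whole
            self.mins = int(rem)
            self.secs = round((rem - int(rem)) * 60)
        else:
            raise TypeError("give either decimal hours, or hours, mins, secs")
```

Comparing against None (rather than `if mins and secs`) matters here, since 0 is a perfectly valid minute or second value but is falsy.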
Or we can use a variable-length argument list, *args (or **kwargs). Here is the alternate version with *args and length checking (let's call it technique #4).
class Duration:
    """Durations in H:M:S format."""
    def __init__(self, *args):
        """Works with decimal hours or H:M:S format."""
        if len(args) == 3: # direct H:M:S format
            ... # checks
            self.hours = args[0]
            self.mins = args[1]
            self.secs = args[2]
        elif len(args) == 1: # decimal
            ... # checks
            ... # convert decimal hours to H:M:S
        else:
            raise TypeError("expected 1 or 3 arguments")
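For symmetry with technique #3, here is technique #4 made runnable (same placeholder conversion logic as before, of my own invention):

```python
# Runnable sketch of technique #4: dispatch on len(args).
class Duration:
    """Durations in H:M:S format."""
    def __init__(self, *args):
        """Works with decimal hours or H:M:S format."""
        if len(args) == 3:      # direct H:M:S
            self.hours, self.mins, self.secs = args
        elif len(args) == 1:    # decimal hours
            hours = args[0]
            whole = int(hours)
            rem = (hours - whole) * 60
            self.hours = whole
            self.mins = int(rem)
            self.secs = round((rem - int(rem)) * 60)
        else:
            raise TypeError("expected 1 or 3 arguments")
```

One practical downside of *args: the signature no longer documents itself, and tools like help() only show `__init__(*args)`.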
Which is best between techniques #3 and #4 in this case? A matter of taste, maybe.
But as I vaguely mentioned in the original post, decorators may also be used. In particular, the linked topic mentions the overloading module, which provides an API similar to C++ overloading through an @overload decorator. See PEP 3124.
It certainly looks handy (and that would be technique #5)!
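For the record, the standard library has a close relative of technique #5: functools.singledispatchmethod (Python 3.8+), which dispatches on the type of the first argument after self. A sketch on the toy Matrix (only int and float are registered here; the list storage is again a placeholder of mine):

```python
from functools import singledispatchmethod

class Matrix:
    def __init__(self, data):
        self.data = list(data)

    @singledispatchmethod
    def __add__(self, other):
        # fallback for unregistered operand types
        raise TypeError("unsupported type for right operand")

    @__add__.register(int)
    @__add__.register(float)
    def _(self, other):
        # matrix + scalar: add the scalar to every element
        return Matrix(a + other for a in self.data)

# Matrix can only be registered once the class itself exists.
@Matrix.__add__.register(Matrix)
def _(self, other):
    # matrix + matrix: element-wise sum
    return Matrix(a + b for a, b in zip(self.data, other.data))
```

Note that PEP 3124 itself was never accepted into the standard library; singledispatchmethod is the closest built-in relative, and it only dispatches on a single argument's type rather than the full signature.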