It's not common to recommend building functions like this, but in this case, I think it's better to have static definitions with a little bit of duplication than to rely on a dynamic function whose complex behavior is difficult to verify -
# each apply_at_N fixes its extra arguments *a after the first N
# positional arguments of the eventual call
def apply_at_1 (f, *a, **kw):
  return lambda x, *a2, **kw2: \
    f(x, *a, *a2, **kw, **kw2)

def apply_at_2 (f, *a, **kw):
  return lambda x, y, *a2, **kw2: \
    f(x, y, *a, *a2, **kw, **kw2)

def apply_at_3 (f, *a, **kw):
  return lambda x, y, z, *a2, **kw2: \
    f(x, y, z, *a, *a2, **kw, **kw2)

def dummy (a, b, c, d, *more):
  print (a, b, c, d, *more)

apply_at_1(dummy, 2, 3, 4)(1, 5, 6, 7, 8)
# 1 2 3 4 5 6 7 8
apply_at_2(dummy, 3, 4, 5)(1, 2, 6, 7, 8)
# 1 2 3 4 5 6 7 8
apply_at_3(dummy, 4, 5, 6)(1, 2, 3, 7, 8)
# 1 2 3 4 5 6 7 8
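For reference, the standard functools.partial only covers the leading-arguments case - effectively what an apply_at_0 would be in the scheme above - which is why helpers like these come up in the first place:

# aliased so it doesn't clash with the hand-rolled partial defined further down
from functools import partial as fpartial

fpartial(dummy, 1, 2)(3, 4, 5)
# 1 2 3 4 5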
Sure, the dynamic function lets you apply an argument at any position, but if you need to apply an argument at position four (the 5th argument), maybe it's time to consider a refactor. In this case, I think it's better to set a sensible limit and then discipline yourself.
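For contrast, a minimal sketch of the fully dynamic version might look like this (apply_at is a hypothetical name, not one of the definitions above) -

def apply_at (n, f, *a, **kw):
  # hypothetical dynamic helper: insert *a after the first n
  # positional arguments of the eventual call
  return lambda *b, **kw2: f(*b[:n], *a, *b[n:], **kw, **kw2)

apply_at(2, dummy, 3, 4, 5)(1, 2, 6, 7, 8)
# 1 2 3 4 5 6 7 8

It works, but now every call site has an index to reason about, which is exactly the verification cost described above.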
Other functional languages lean on the same kind of fixed-arity family, for typing reasons. Consider Haskell's liftA2, liftA3, etc., or Elm's List.map, List.map2, List.map3, List.map4, ...
Sometimes it's just easier to KISS.
Here's a version of partial supporting wild-cards, __ -
def __ ():
  return None

def partial (f, *a):
  def combine (a, b):
    if not a:
      return b
    elif a[0] is __:
      # wildcard: fill this slot with the next call-time argument
      return (b[0], *combine(a[1:], b[1:]))
    else:
      # pre-applied argument: keep it and move on
      return (a[0], *combine(a[1:], b))
  return lambda *b: f(*combine(a, b))

def dummy (a, b, c, d, *more):
  print (a, b, c, d, *more)

partial(dummy, 1, __, __, 4)(2, 3, 5, 6, 7, 8)
# 1 2 3 4 5 6 7 8
partial(dummy, __, __, 3, __, __, 6, 7, 8)(1, 2, 4, 5)
# 1 2 3 4 5 6 7 8
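A note on the design: the wildcard works because combine compares by identity (a[0] is __), so __ only needs to be a unique object - any fresh sentinel such as object() would do just as well. Also, this version handles positional arguments only, and if a call supplies fewer arguments than there are wildcards, the b[0] lookup raises an IndexError.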