I'm getting unexpected results when I call lambdas that I've stored in a dict via a dict comprehension.
The application is price-sensitivity estimation (i.e., the change in each segment's demand as a result of a change in price):
segments = ['A','B','C'] # price segments
p = {'A': 200, 'B': 200, 'C': 200} # baseline price
d = {'A': 150, 'B': 100, 'C': 70} # baseline demand
e = {'A': -0.4, 'B': -0.5, 'C': -0.8} # elasticity estimate
print(d['A'] * (1 + e['A']*(100 - p['A'])/p['A']))  # 180.0, as expected
print(d['B'] * (1 + e['B']*(200 - p['B'])/p['B']))  # 100.0, as expected
print(d['C'] * (1 + e['C']*(300 - p['C'])/p['C']))  # 42.0, as expected
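(Sanity check for segment A: a price of 100 is 50% below the baseline of 200, and -0.4 * -50% = +20%, so demand rises from 150 to 150 * 1.2 = 180.)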
For reusability, I'd like to store each segment's demand function as a lambda, in a dict keyed by segment name. My attempt:
demand = {s: lambda x: d[s] * (1 + e[s]*(x - p[s])/p[s])
          for s in segments}
But calling these lambdas gives results that differ from the direct calculations above:
print(demand['A'](100))  # 98.0, expected 180.0
print(demand['B'](200))  # 70.0, expected 100.0
print(demand['C'](300))  # 42.0, as expected
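In fact, the three lambdas appear to be identical: the values I do get (98, 70, 42) all match segment C's formula evaluated at the given price, and evaluating all three at the same price (using 300 here purely as an arbitrary test value) gives the same result:
print(demand['A'](300))  # 42.0
print(demand['B'](300))  # 42.0
print(demand['C'](300))  # 42.0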
What am I doing wrong?