
I am trying to figure out how to interpolate a function, and I want to be able to extrapolate a small way beyond the interpolation range. There is some background theory which leads me to expect that for large values of the independent variable x the function will asymptote to −A·x^n behaviour (possibly with some offsets). I want to find the best-fit power law, but only for large values of x. Is there a good way of implementing this in Python?

I tried to do least squares on the log-linearised version of my power law, but it gave me an inappropriate fit, which I suspect is due to a coding error rather than a maths mistake. I was thinking of doing this by introducing an offset x0 into the power law, so that the log equation would be ln(−y) = ln(A) − n·ln(x − x0). What I have so far is:

import numpy as np
import scipy.optimize
import scipy.interpolate
import matplotlib.pyplot as plt

# Keep only the tail of the X grid, where the asymptotic behaviour should hold
X_asymptote = []
for i in aX_vals1:
    if i > 20 * lab_inc(T - 1):
        X_asymptote.append(i)

# Pick out the corresponding values of the optimized value function
VT1_optimized_asymptote = []
for i in X_asymptote:
    a = aX_vals1.index(i)
    VT1_optimized_asymptote.append(VT1_optimized_val[a])
# Convert to logs. Note the minus sign on y! It has to be put back in when unravelling the fit.
X_asymptote_logs = []
for i in X_asymptote:
    X_asymptote_logs.append(np.log(i))
X_asymptote_logs_array = np.asarray(X_asymptote_logs)

VT1_optimized_asymptote_logs = []
for i in VT1_optimized_asymptote:
    VT1_optimized_asymptote_logs.append(np.log(-i))
VT1_optimized_asymptote_logs_array = np.asarray(VT1_optimized_asymptote_logs)


# Linear fit in log-log space: ln(-y) = p[0] + p[1]*ln(x)
fitfunc = lambda p, x: p[0] + p[1] * x
errfunc = lambda p, x, y: y - fitfunc(p, x)

out = scipy.optimize.leastsq(errfunc, [-4, 5],
                             args=(X_asymptote_logs_array, VT1_optimized_asymptote_logs_array),
                             full_output=1)
pfinal = out[0]
covar = out[1]

index = pfinal[1]        # power-law exponent n
amp = np.exp(pfinal[0])  # amplitude A

xnew = np.linspace(X_asymptote[0], aX_vals1_array[-1], 100)
ynew = VT1_optimized(xnew)
plt.title("Optimized value function, linear extrapolation")
plt.plot(aX_vals1_array, VT1_optimized_val_array, 'o', xnew, ynew, '-')
plt.ylabel('Value function with optimized asset allocation')
plt.xlabel('ratio X/Y')
plt.show()
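
For reference, here is a stripped-down, self-contained sketch of the log-linearised tail fit I have in mind, on made-up data. Every name and number in it is a placeholder rather than part of my actual model, and it ignores the x0 offset, so treat it as an illustration only:

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data that behaves like y = -A*(x - x0)**n for large x (placeholder values)
A_true, n_true, x0_true = 2.0, -1.5, 1.0
x = np.linspace(2.0, 50.0, 200)
y = -A_true * (x - x0_true) ** n_true * (1 + 0.01 * rng.standard_normal(x.size))

# Keep only the tail, where the asymptotic form should hold
tail = x > 20.0
x_tail, y_tail = x[tail], y[tail]

# Log-linearise: ln(-y) = ln(A) + n*ln(x), then fit a straight line
slope, intercept = np.polyfit(np.log(x_tail), np.log(-y_tail), 1)
n_fit = slope
A_fit = np.exp(intercept)
print(n_fit, A_fit)  # should come out close to n_true and A_true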

Could anyone give me a hand with this?

Any help is much appreciated!

Update: I have a pragmatic solution to the problem. I just fitted the power law over the whole data range. It seems to have worked remarkably well, and the code now does what I want, though not via the tail-only fit I was originally hoping to understand with this question. The code I now have is:

# Linear fit in log-log space, now using the whole data range
fitfunc = lambda p, x: p[0] + p[1] * x
errfunc = lambda p, x, y: y - fitfunc(p, x)

out = scipy.optimize.leastsq(errfunc, [-4, 5],
                             args=(np.asarray(X_asymptote_logs), np.asarray(VT1_optimized_asymptote_logs)),
                             full_output=1)
pfinal = out[0]
covar = out[1]

index = pfinal[1]        # power-law exponent n
amp = np.exp(pfinal[0])  # amplitude A
print(index, amp)
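# Unravelling the log fit: ln(-y) = p[0] + p[1]*ln(X), so y = -A * X**n with A = exp(p[0]) and n = p[1]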

def VPL(X):
    return -amp * X**index

def VT1_optimized1(X):
    # Cubic spline interpolation inside the data range, fitted power law beyond it
    if X <= aX_vals1_array[-1]:
        VT1_op = scipy.interpolate.UnivariateSpline(aX_vals1_array, VT1_optimized_val_array,
                                                    k=3, s=0, check_finite=True)
        return VT1_op(X)
    return VPL(X)


# Evaluate on a grid running out to twice the last data point, so the extrapolation branch gets used
xnew = np.linspace(aX_vals1_array[0], aX_vals1_array[-1] * 2, 100)

y0=[]
for i in xnew:
    y0.append(VT1_optimized1(i))
ynew=np.asarray(y0)
plt.title("Optimized value function, power-law extrapolation")
plt.plot(np.asarray(aX_vals1), np.asarray(VT1_optimized_val), 'o', xnew, ynew, '-')
plt.ylabel('Value function with optimized asset allocation')
plt.xlabel('X')
plt.show()

This seems to work almost too well! The next step is to check the asset allocation.
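
If the x0 offset from my original question turns out to matter, another option would be to fit the offset form y = −A·(X − x0)^n directly with scipy.optimize.curve_fit instead of log-linearising. The sketch below is on placeholder data only (none of these names or numbers come from my model), so it is just an illustration of the shape of such a fit:

def neg_power_law(x, A, n, x0):
    return -A * (x - x0) ** n

# Placeholder tail data; in practice this would be X_asymptote and VT1_optimized_asymptote
x_tail = np.linspace(25.0, 50.0, 50)
y_tail = -2.0 * (x_tail - 1.0) ** -1.5

# Rough initial guesses and bounds that keep (x - x0) positive during the fit
p0 = [1.0, -1.0, 0.0]
popt, pcov = scipy.optimize.curve_fit(neg_power_law, x_tail, y_tail, p0=p0,
                                      bounds=([0.0, -5.0, -10.0], [10.0, 0.0, 20.0]))
A_fit, n_fit, x0_fit = popt
print(A_fit, n_fit, x0_fit)  # with this exact placeholder data, should land near 2, -1.5, 1

The bounds keep X − x0 strictly positive over the placeholder grid; without them the optimiser can wander into regions where (X − x0)^n is undefined.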




