I'm using a bisection algorithm to find the maximum of a function.
Here is my code:
from math import exp
h = 6.62606876e-34
c = 2.99792458e8
k = 1.3806504e-23
T = 7000
epsilon = 1.e-10
def fct(lam):
    return 2*h**2*c**3*exp(h*c/(lam*k*T))/(k*T*lam**7*(exp(h*c/(lam*k*T)) - 1)) - 10*c**2*h/(lam**6*(exp(h*c/(lam*k*T)) - 1))

lam1 = 100.e-9
lam2 = 1000.e-9
delta = lam2 - lam1
f1 = fct(lam1)
f2 = fct(lam2)
while delta > epsilon:
    lamm = 0.5*(lam1 + lam2)
    fm = fct(lamm)
    f2 = fct(lam2)
    if fm*f2 > 0:
        lam2 = lamm
    else:
        lam1 = lamm
    delta = lam2 - lam1
print('root of the function: {}+/-{}'.format(lamm, delta))
The problem is that, because of the finite precision of floats, I get a ZeroDivisionError when the line fm = fct(lamm) is evaluated.
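To show the kind of precision loss I mean, here is a minimal, self-contained sketch of how exp(x) - 1 can become exactly zero in floating point (the value of x here is chosen purely for illustration, not taken from my run): when x is tiny, 1 + x rounds back to 1.0 in double precision, so the subtraction returns exactly 0.0 and any division by it blows up.

```python
from math import exp, expm1

x = 1e-20  # tiny exponent, chosen purely for illustration
naive = exp(x) - 1      # 1 + 1e-20 rounds to 1.0, so this is exactly 0.0
safe = expm1(x)         # expm1 evaluates exp(x) - 1 without the cancellation

print(naive)  # 0.0
print(safe)   # 1e-20

try:
    1.0 / naive
except ZeroDivisionError:
    print("dividing by exp(x) - 1 raised ZeroDivisionError")
```

As far as I understand, math.expm1 exists in the standard library precisely for this expression, but I'm not sure whether that alone is the right fix for my function above.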
How can I fix this issue?