Well, it is unclear to me why you are using a minimization for such a problem at all. You can simply normalize the matrix and then compute alpha from the normalized matrix and the old one. For matrix normalization look here.
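In fact, since g is affine in alpha (you can rewrite it as g = alpha*(1 + w) - g_at_one*w, where w collects the exponential and trigonometric factors and does not depend on alpha), the condition sum(g)/N == 1 can be solved for alpha in closed form, with no optimizer at all. A minimal sketch of that direct computation, reusing the parameters from the code below (the rewriting into the affine form is mine):

```python
import numpy as np

# same parameters as in the minimization code below
N = 100
eta = np.array([i / 100 for i in range(N)])
g_at_one = 0.01

# g(alpha) = alpha*(1 + w) - g_at_one*w, with w independent of alpha
x = eta - eta[N - 1]
w = np.exp(x / 2) * (np.sin(np.sqrt(3) / 2 * x) / np.sqrt(3)
                     - np.cos(np.sqrt(3) / 2 * x))

# mean(g) == 1  =>  alpha*mean(1 + w) - g_at_one*mean(w) == 1
alpha = (1 + g_at_one * np.mean(w)) / np.mean(1 + w)
print(alpha)  # approx. 7.96, matching the minimizer's result below
```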
In your code, the objective function contains a division by zero (1 - g_at_one/alpha), so it is not defined at alpha = 0, and that is presumably why scipy jumps over that point.
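You can reproduce that in isolation (a minimal sketch; scipy passes alpha in as a one-element array, so with numpy semantics the division produces -inf and a RuntimeWarning instead of raising an exception):

```python
import numpy as np

g_at_one = 0.01
alpha = np.array([0.0])      # the singular point of the objective
print(1 - g_at_one / alpha)  # [-inf], with a "divide by zero encountered" RuntimeWarning
```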
EDIT:
So I simply reformulated your problem with an inequality constraint, f(alpha) - 1 >= 0: since the objective is f(alpha) itself, the minimizer is driven down to exactly f(alpha) = 1. I also added some prints for better visualization. I hope this helps:
```python
import numpy as np
from scipy.optimize import minimize

# just defining some parameters
N = 100
g = np.ones(N)
eta = np.array([i / 100 for i in range(N)])
g_at_one = 0.01


def f(alpha):
    # evaluate g on the whole eta grid for the current alpha
    g = alpha * (1 + (1 - g_at_one / alpha)
                 * np.exp((eta - eta[N - 1]) / 2)
                 * (np.sin(np.sqrt(3) / 2 * (eta - eta[N - 1])) / np.sqrt(3)
                    - np.cos(np.sqrt(3) / 2 * (eta - eta[N - 1]))))
    to_be_minimized = np.sum(g) / N
    print("+ For alpha: %7s => f(alpha): %7s" % (round(alpha[0], 3),
                                                 round(to_be_minimized, 3)))
    return to_be_minimized


# inequality constraint f(alpha) - 1 >= 0, which pins the minimum at f(alpha) = 1
cons = {'type': 'ineq', 'fun': lambda alpha: f(alpha) - 1}

result_of_minimization = minimize(f,
                                  x0=0.1,
                                  constraints=cons,
                                  tol=1e-8,
                                  options={'disp': True})
alpha_at_min = result_of_minimization.x

# verify that the constraint is met at the minimum
print("\nAlpha at min: ", alpha_at_min[0])
alpha = alpha_at_min
g = alpha * (1 + (1 - g_at_one / alpha)
             * np.exp((eta - eta[N - 1]) / 2)
             * (np.sin(np.sqrt(3) / 2 * (eta - eta[N - 1])) / np.sqrt(3)
                - np.cos(np.sqrt(3) / 2 * (eta - eta[N - 1]))))
print("Verification: ", round(np.sum(g) / N - 1) == 0)
```
Output:
```
+ For alpha: 0.1 => f(alpha): 0.021
+ For alpha: 0.1 => f(alpha): 0.021
+ For alpha: 0.1 => f(alpha): 0.021
+ For alpha: 0.1 => f(alpha): 0.021
+ For alpha: 0.1 => f(alpha): 0.021
+ For alpha: 0.1 => f(alpha): 0.021
+ For alpha: 0.1 => f(alpha): 0.021
+ For alpha: 7.962 => f(alpha): 1.0
+ For alpha: 7.962 => f(alpha): 1.0
+ For alpha: 7.962 => f(alpha): 1.0
+ For alpha: 7.962 => f(alpha): 1.0
+ For alpha: 7.962 => f(alpha): 1.0
+ For alpha: 7.962 => f(alpha): 1.0
+ For alpha: 7.962 => f(alpha): 1.0
+ For alpha: 7.962 => f(alpha): 1.0
+ For alpha: 7.962 => f(alpha): 1.0
+ For alpha: 7.962 => f(alpha): 1.0
+ For alpha: 7.962 => f(alpha): 1.0
+ For alpha: 7.962 => f(alpha): 1.0
Optimization terminated successfully. (Exit mode 0)
            Current function value: 1.0000000000000004
            Iterations: 3
            Function evaluations: 9
            Gradient evaluations: 3

Alpha at min:  7.9620687892224264
Verification:  True
```