I want to find a local maximum starting from a given start vector (numpy array). For that I want to use the very simple gradient descent algorithm, where I can set the maximum step size manually. Is there an implementation in scipy.optimize or somewhere else for Python 3? It needs to be unconstrained multivariate optimisation, and it cannot be anything fancy like the "Nelder-Mead simplex algorithm", "BFGS", "conjugate gradient", "stochastic gradient descent" or anything else. I need the algorithm to follow the gradient at each step - nothing more. I am able to provide the gradient of my function.
Obviously gradient descent is pretty easy to implement oneself. But with a canonical implementation I'd have one less thing to unit-test. It would seem strange that it has to be implemented by hand just because it is simple.
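To make the desired behaviour concrete, here is a minimal sketch of what I mean (the function name, parameter names and defaults are my own, not from any library): follow the gradient uphill with a fixed learning rate, but cap the length of each step at a user-supplied maximum.

```python
import numpy as np

def gradient_ascent(grad, x0, learning_rate=0.1, max_step=0.5,
                    tol=1e-8, max_iter=10_000):
    """Follow the gradient of f uphill from x0 towards a local maximum.

    grad     : callable returning the gradient of f as a numpy array
    x0       : start vector (numpy array)
    max_step : upper bound on the Euclidean length of a single step
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break                      # gradient vanished: (local) maximum
        step = learning_rate * g
        norm = np.linalg.norm(step)
        if norm > max_step:
            step *= max_step / norm    # cap the step length at max_step
        x = x + step
    return x
```

For example, for f(x) = -||x - t||^2 the gradient is -2(x - t), and the iteration converges to t from any start vector. For descent instead of ascent one would pass the negated gradient.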