
I want to find the local maximum of a function from a start vector (a numpy array). For that I want to use the very simple gradient descent algorithm, where I can set the maximum step size manually. Is there an implementation in scipy.optimize or somewhere else for Python 3? It needs to be unconstrained multivariate optimisation, and it cannot be anything fancy like the "Nelder-Mead simplex algorithm", "BFGS", "conjugate gradient", "stochastic gradient descent" or anything else. I need the algorithm to follow the gradient at each step - nothing more. I am able to provide the gradient of my function.

Obviously gradient descent is pretty easy to implement oneself. But with a canonical implementation I'd have one less thing to unit-test. It would seem strange if it had to be implemented by hand just because it is simple.
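
For reference, the hand-rolled version I am trying to avoid would be something like the sketch below. Since I want a local maximum, it follows the gradient uphill; the names `gradient_ascent`, `learning_rate` and `max_step` are just my own placeholders, not an existing API:

```python
import numpy as np

def gradient_ascent(grad, x0, learning_rate=0.1, max_step=1.0,
                    max_iter=10000, tol=1e-8):
    """Plain gradient ascent: follow the gradient uphill from x0,
    never taking a step longer than max_step."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        step = learning_rate * grad(x)
        length = np.linalg.norm(step)
        if length < tol:           # gradient (nearly) vanished: local maximum
            break
        if length > max_step:      # enforce the manual maximum step size
            step *= max_step / length
        x += step
    return x

# maximise f(x) = -||x - 3||^2, whose gradient is -2 * (x - 3)
x_max = gradient_ascent(lambda x: -2 * (x - 3.0), np.zeros(3))
```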

J. Doe
  • Like this http://stackoverflow.com/questions/17784587/gradient-descent-using-python-and-numpy? – Ed Smith Nov 01 '16 at 17:27
  • @EdSmith: Seems like it, but I want an "official" implementation - like in scipy. – J. Doe Nov 01 '16 at 18:01
  • Use [autograd](https://github.com/HIPS/autograd) for automatic differentiation and add the one line needed for the gradient-descent steps. – sascha Nov 01 '16 at 22:16
  • @sascha: Actually I can provide the gradient myself. But the lib seems pretty neat. It sounds like it does not get the gradient by numeric approximation, but by code analysis - is that right? – J. Doe Nov 02 '16 at 12:00
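
For context, a minimal sketch of what that last comment describes (my own illustration; `f` is a stand-in objective, not from the thread): autograd differentiates by tracing the numpy operations in the function, not by finite differences.

```python
import autograd.numpy as np   # thinly wrapped numpy that autograd can trace
from autograd import grad

def f(x):                     # stand-in objective, assumed for illustration
    return -np.sum((x - 3.0) ** 2)

grad_f = grad(f)              # grad_f computes the exact gradient of f
print(grad_f(np.array([1.0, 2.0])))   # -> [ 4.  2.], i.e. -2 * (x - 3)
```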

0 Answers