
I'm an experimental physicist, trying to automate a relatively simple (but sensitive) optimization in my measurements that is currently done completely manually and takes up a lot of my time. After some thinking and tips in the comments, I've reduced the problem to the following:

There is some function f(x) that I want to maximize. However, I can only evaluate f(x); I cannot evaluate its derivative explicitly. Moreover, I cannot sample a large range of x: if f(x) < threshold, I am in trouble (and it takes me over a day to recover). Luckily, I have one starting value x_0 such that f(x_0) > threshold, and I can guess some initial step size eps for which f(x_0 + eps) > threshold also holds (however, I don't know whether f(x_0 + eps) is greater or less than f(x_0) before evaluating). Could someone suggest an algorithmic/adaptive/feedback protocol to find the x that maximizes f(x) up to some tolerance x_tol? So far I've found the golden-section search, but that requires choosing some range (a, b) over which to maximize, which I cannot do; starting from a wide range might bring me below my threshold.

How I currently do it manually is as follows: I evaluate f(x_0) and then f(x_0 + eps). If this leads to a decrease, I evaluate f(x_0 - eps) instead. Based on how large the changes in f are relative to the threshold I cannot cross, I either increase or decrease eps and continue searching in the same direction until f(x) starts decreasing, which tells me a maximum has been passed. I then go back to the best point found. This way I am always probing the top part of the peak and thus remain in a safe range.
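The manual protocol above can be sketched as a small adaptive hill climb. This is only an illustration of the idea, not a vetted implementation: the function name, the halving/shrink factors, and the caution rule (shrink the step when a downhill evaluation eats a large fraction of the safety margin) are my own assumptions, and the quadratic test function is a toy stand-in for the real measurement.

```python
def safe_hill_climb(f, x0, eps, threshold, x_tol, max_evals=200):
    """Adaptive 1-D hill climb that stays near the top of the peak.

    f         -- function to maximize (evaluation only, no derivative)
    x0        -- known-safe starting point, f(x0) > threshold
    eps       -- initial step size, assumed small enough to be safe
    threshold -- value f must never be driven far below
    x_tol     -- stop once the step size shrinks below this tolerance
    """
    x_best, f_best = x0, f(x0)
    if f_best <= threshold:
        raise ValueError("starting point is already below threshold")
    direction = 1.0
    evals = 1
    while eps > x_tol and evals < max_evals:
        x_try = x_best + direction * eps
        f_try = f(x_try)
        evals += 1
        if f_try > f_best:
            x_best, f_best = x_try, f_try      # keep climbing this way
        else:
            # big drop relative to the safety margin: be more cautious
            if f_best - f_try > 0.5 * (f_best - threshold):
                eps *= 0.5
            direction = -direction             # try the other side once
            x_try = x_best + direction * eps
            f_try = f(x_try)
            evals += 1
            if f_try > f_best:
                x_best, f_best = x_try, f_try
            else:
                eps *= 0.5                     # peak is bracketed: shrink
    return x_best, f_best
```

On a toy peak such as f(x) = 1 - (x - 2)^2 with threshold 0, starting at x_0 = 1.5 with eps = 0.2, this walks up to x ≈ 2 while only ever evaluating points near the top of the peak. A real version would also want a rule for *growing* eps when the margin is comfortable, as in the manual procedure.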

user129412
  • I would say you need to define the problem or break it down such as [finding local optimum](https://stackoverflow.com/questions/4624970/finding-local-maxima-minima-with-numpy-in-a-1d-numpy-array). – lloyd Sep 21 '18 at 12:10
  • I broke it down a little further. – user129412 Sep 21 '18 at 13:43

1 Answer


I would say you need to define the problem or break it down, e.g. by finding a local optimum, or try gradient descent.

lloyd
  • That second one seems very relevant. I'll have a look at that. However, I have an unknown function rather than a known one, so symbolic math won't really help. But the structure might. – user129412 Sep 21 '18 at 12:15
  • Yeah, the trouble I run into is that I cannot evaluate the derivative. One can do it with finite differences, but that seems inefficient. – user129412 Sep 21 '18 at 13:43
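The finite-difference approach mentioned in the comment can be sketched as follows. This is a hedged illustration, not the asker's method: the function name and parameters (`h` for the probe offset, `lr` for the step scale, `max_step` to cap how far any single update can move, which is what keeps the search near the known-safe region) are my own choices, and the quadratic test function is a toy stand-in.

```python
def fd_gradient_ascent(f, x0, h, lr, x_tol, max_step, max_iters=100):
    """1-D gradient ascent using a central finite-difference slope.

    Each iteration costs two evaluations at x +/- h; h and max_step
    must be chosen small enough (from prior knowledge) that these
    probe points and the update never drop f below the threshold.
    """
    x = x0
    for _ in range(max_iters):
        # central difference estimate of f'(x)
        slope = (f(x + h) - f(x - h)) / (2 * h)
        # clamp the update so one step can never jump out of the safe zone
        step = max(-max_step, min(max_step, lr * slope))
        if abs(step) < x_tol:
            break
        x += step
    return x
```

As the comment notes, this spends two evaluations per slope estimate, so for a single parameter the direct hill climb from the question is usually cheaper; finite differences pay off mainly when several parameters must be tuned at once.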