I'm an experimental physicist trying to automate a relatively simple (but sensitive) optimization in my measurements that is currently done entirely by hand and takes up a lot of my time. After some thought and some tips in the comments, I've reduced the problem to the following:
There is some function f(x) that I want to maximize. However, I can only evaluate f(x); I cannot evaluate its derivative explicitly. Moreover, I cannot sample a wide range of x: if f(x) < threshold, I am in trouble (and it takes me over a day to recover). Luckily, I have one starting value x_0 such that f(x_0) > threshold, and I can guess an initial step size eps for which f(x_0 + eps) > threshold also holds (though I don't know whether f(x_0 + eps) is larger or smaller than f(x_0) before evaluating it).

Could someone suggest an algorithmic/adaptive/feedback protocol to find the x that maximizes f(x) to within some tolerance x_tol? So far I've found golden-section search, but that requires choosing a bracket (a, b) over which to maximize, which I cannot do: starting from a wide bracket might take me below my threshold.
How I currently do it manually is as follows: I evaluate f(x_0) and then f(x_0 + eps). If this leads to a decrease, I evaluate f(x_0 - eps) instead. Based on the local slope (essentially, I just look at whether the changes in f are large or small relative to the threshold I cannot cross), I either increase or decrease eps and continue stepping in the same direction until a maximum is found, which I notice because f(x) starts decreasing. I then go back to the best point so far. This way I am always probing the top part of the peak and thus remain in a safe range. In rough Python, the loop I'd like to automate looks like the sketch below.
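Here is a minimal sketch of that manual loop (not code I actually run: `measure_f` stands in for one slow measurement, and `grow`, `shrink`, and `margin` are placeholder tuning knobs I've made up for illustration):

```
def maximize_safely(measure_f, x0, eps, threshold, x_tol,
                    grow=1.5, shrink=0.5, margin=3.0):
    """Adaptive hill climbing that only ever probes near the best point so far.

    measure_f : one (slow) measurement of f at a given x
    x0        : known-safe start, f(x0) > threshold
    eps       : initial step, f(x0 + eps) > threshold assumed
    threshold : value f must never drop below
    x_tol     : stop once the step size falls below this

    Note grow * shrink < 1, so alternating gains and overshoots still
    let the step contract and the loop terminate.
    """
    x_best, f_best = x0, measure_f(x0)
    step, direction = eps, +1

    # Pick the initial direction: if stepping by +eps decreases f, try -eps.
    f_trial = measure_f(x0 + step)
    if f_trial <= f_best:
        direction = -1
        f_trial = measure_f(x0 - step)
    if f_trial > f_best:
        x_best, f_best = x0 + direction * step, f_trial

    while step > x_tol:
        x_trial = x_best + direction * step
        f_trial = measure_f(x_trial)
        if f_trial > f_best:
            gain = f_trial - f_best
            x_best, f_best = x_trial, f_trial
            # Grow the step only while the headroom above the threshold
            # is large compared to the change in f we just observed.
            if f_best - threshold > margin * gain:
                step *= grow
        else:
            # f decreased: we stepped past the maximum. Stay at x_best,
            # shrink the step, and flip direction so both sides of the
            # peak get probed as the step tightens.
            step *= shrink
            direction = -direction

    return x_best, f_best
```

The idea is that the step only grows while the headroom above the threshold is large compared to the changes in f actually observed, and it shrinks (flipping direction so both sides of the peak get probed) as soon as f starts decreasing, so the search always tightens onto the top of the peak.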