Ternary search is a fast algorithm for finding the minimum or maximum of a unimodal function, a function that either increases and then decreases or decreases and then increases. Assume that we are dealing with a function that decreases, then increases, and that we want to find the minimum value. Ternary search works using the following recursion:

  • If the size of the window is "small enough," just return its midpoint.
  • Otherwise:
    • Evaluate the function at the left and right boundaries; call the values l and r.
    • Evaluate the function at the 1/3 and 2/3 points; call the values m1 and m2.
    • If m1 < m2, then we are in the increasing region of the function and the minimum can't be between m2 and r.
    • If m1 > m2, then we are in the decreasing region of the function and the minimum can't be between l and m1.
    • Recursively search the 2/3 that weren't discarded.

This algorithm works quickly because it can keep tossing out 1/3 of the values on each iteration.
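
For concreteness, here is a minimal sketch of this recursion in Python (an iterative rendering; the function name, the tolerance parameter, and the example at the bottom are my own):

    def ternary_search_min(f, lo, hi, tol=1e-9):
        # Minimize a unimodal f (decreasing, then increasing) on [lo, hi].
        while hi - lo > tol:             # window not yet "small enough"
            m1 = lo + (hi - lo) / 3      # the 1/3 probe point
            m2 = hi - (hi - lo) / 3      # the 2/3 probe point
            if f(m1) < f(m2):
                hi = m2                  # minimum can't lie in (m2, hi]
            else:
                lo = m1                  # minimum can't lie in [lo, m1)
        return (lo + hi) / 2             # midpoint of the final window

    # Example: the minimum of (x - 2)^2 on [0, 10] is at x = 2.
    print(ternary_search_min(lambda x: (x - 2) ** 2, 0, 10))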

However, I feel like I'm missing something, because I believe we can make this algorithm run much faster. In particular, notice that we always throw out the third of the range between one boundary and the nearer probe point, which means we retain the region between that probe point and the opposite boundary. Because ternary search picks the probe points at the 1/3 and 2/3 marks, we retain 2/3 of the values at each step. What if, instead of probing at the 1/3 and 2/3 points, we probed at the 1/2 - ε and 1/2 + ε points for an extremely small ε? Then we would toss out 1/2 - ε of the range on each step, so the size of the range would decrease much faster than if we were only tossing out 1/3 of the values each time. A sketch of this variant follows.
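
Here is the same routine with the modified probe points, assuming ε is measured as a fraction of the current window (the names and the default value of eps are mine):

    def bisecting_search_min(f, lo, hi, tol=1e-9, eps=5e-4):
        # Same recursion, but probe just to either side of the midpoint.
        while hi - lo > tol:
            m1 = lo + (0.5 - eps) * (hi - lo)   # the 1/2 - eps point
            m2 = lo + (0.5 + eps) * (hi - lo)   # the 1/2 + eps point
            if f(m1) < f(m2):
                hi = m2          # keep [lo, m2]: 1/2 + eps of the range
            else:
                lo = m1          # keep [m1, hi]: 1/2 + eps of the range
        return (lo + hi) / 2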

As an example, if I pick ε = 1/2000, we get to toss out 999/2000 of the range to search on each iteration. The fraction remaining after some number of iterations is shown here (ternary search is on the left, my algorithm is on the right):

 1 :                1.0  >=                1.0
 2 :     0.666666666667  >=             0.5005
 3 :     0.444444444444  >=         0.25050025
 4 :     0.296296296296  >=     0.125375375125
 5 :     0.197530864198  >=    0.0627503752501
 6 :     0.131687242798  >=    0.0314065628127
 7 :    0.0877914951989  >=    0.0157189846877
 8 :    0.0585276634659  >=   0.00786735183621
 9 :    0.0390184423106  >=   0.00393760959402
10 :    0.0260122948737  >=   0.00197077360181
11 :    0.0173415299158  >=  0.000986372187705
12 :    0.0115610199439  >=  0.000493679279947
13 :   0.00770734662926  >=  0.000247086479613
14 :   0.00513823108617  >=  0.000123666783046
15 :   0.00342548739078  >=  6.18952249147e-05
16 :   0.00228365826052  >=  3.09785600698e-05
17 :   0.00152243884035  >=  1.55047693149e-05
18 :   0.00101495922690  >=  7.76013704213e-06
19 :  0.000676639484599  >=  3.88394858959e-06
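
These numbers come from the per-step retained fractions of 2/3 and 1/2 + ε = 0.5005, so the table can be reproduced (modulo the exact float formatting) with a short loop:

    # Fraction of the original range remaining after each iteration.
    for k in range(19):
        print(f"{k + 1:2d} : {(2 / 3) ** k:18.12g}  >=  {0.5005 ** k:18.12g}")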

Is this modified version of the algorithm just "better" than the original version? Or is there something I'm missing here that would mean that I shouldn't use the modified strategy for picking the probe points?

templatetypedef
  • Late, but for anybody else who might come across this question, the OP describes the intuition behind a method called "Dichotomous Search". – Bretton Wade May 25 '21 at 17:26

2 Answers

This version will certainly converge faster, but it may be more prone to floating-point precision issues. For example, what if you get m + eps = m? That is a real possibility if m is large, say. So there is a tradeoff between rate of convergence and accuracy. Arguably the best algorithm in this class is golden section search, which is both fast and accurate.
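
Both points can be illustrated in Python. The assertion below shows the m + eps = m hazard in double precision, and the routine is a sketch of golden section search (the naming is mine): its two probes stay well separated, and one probe value is reused from one iteration to the next:

    import math

    # For large m, adding a tiny eps changes nothing in double precision.
    assert 1e16 + 1e-3 == 1e16

    def golden_section_min(f, lo, hi, tol=1e-9):
        # Keep 1/phi ~ 0.618 of the range per step, reusing one probe
        # (and its already-computed f value) across iterations.
        inv_phi = (math.sqrt(5) - 1) / 2
        m1, m2 = hi - inv_phi * (hi - lo), lo + inv_phi * (hi - lo)
        f1, f2 = f(m1), f(m2)
        while hi - lo > tol:
            if f1 < f2:
                hi, m2, f2 = m2, m1, f1      # old m1 becomes the new m2
                m1 = hi - inv_phi * (hi - lo)
                f1 = f(m1)
            else:
                lo, m1, f1 = m1, m2, f2      # old m2 becomes the new m1
                m2 = lo + inv_phi * (hi - lo)
                f2 = f(m2)
        return (lo + hi) / 2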

Peteris
    By my understanding, the advantage of golden section search is that it evaluates the function less frequently because it can reuse probes across iterations, not because it is more numerically stable. Am I incorrect about this? – templatetypedef Dec 14 '11 at 19:58
  • That's exactly the case. It is also quite stable compared to the extreme bisecting algorithm, though, since the middle two points are reasonably spread out. – Peteris Dec 14 '11 at 20:00

If a function is unimodal, then g(y) = F(y+ε) - F(y) crosses zero exactly once as y increases from the left boundary to the right boundary.

Basically, what you propose in your second/improved algorithm is a binary search for the zero crossing (root) of g(y). Assume that you are doing minimization, so F(y) has a single minimum. Then g(y) is negative for a while, and then positive. Thus, if g(x) < 0, the root is bigger than x (or, more precisely, x+ε), and if g(x) > 0, the root is smaller than x.

This is faster (in the worst case) than your first/original algorithm, because the region where the minimum is located is halved at each step instead of being multiplied by 2/3.
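
A minimal sketch of that reading, assuming minimization of F on [lo, hi] (the names and the finite-difference ε are mine):

    def min_via_sign_of_g(F, lo, hi, tol=1e-9, eps=1e-9):
        # Binary search for the zero crossing of g(y) = F(y + eps) - F(y);
        # the bracket is halved on every step.
        while hi - lo > tol:
            x = (lo + hi) / 2
            if F(x + eps) - F(x) < 0:
                lo = x    # g(x) < 0: still descending, so the root is right of x
            else:
                hi = x    # g(x) > 0: ascending, so the root is left of x
        return (lo + hi) / 2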

willem
  • This is, in fact, exactly the intuition I had when coming up with this algorithm and exactly why I'm so confused by ternary search. :-) – templatetypedef Dec 14 '11 at 20:06