
In hill climbing for 1 dimension, I try two neighbors - a small delta to the left and one to the right of my current point - and then keep the one that gives a higher value of the objective function. How do I extend this to an n-dimensional space? How does one define a neighbor in an n-dimensional space? Do I have to try 2^n neighbors (a delta applied to each of the dimensions)?

max_max_mir
  • Check out [this answer for finding the nearest neighbor in high dimensional data](http://stackoverflow.com/questions/5751114/nearest-neighbors-in-high-dimensional-data). – macco Jan 21 '17 at 22:34

3 Answers


You don't need to compare each pair of neighbors; you need to compute a set of neighbors, e.g. points on a circle (a sphere or hypersphere in higher dimensions) with a radius of delta, and then take the one with the highest value to "climb up". In any case you will discretize the neighborhood of your current solution and compute the score function for each neighbor. If you can differentiate your function, gradient ascent/descent based algorithms may solve your problem:

1) Compute the gradient (direction of steepest ascent)

2) Take a small step in the direction of the gradient

3) Stop if the solution does not change
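A minimal sketch of that gradient-ascent loop in Python; the objective `f`, its gradient `grad_f`, and the step size are placeholders you would replace with your own:

```python
import numpy as np

def gradient_ascent(f, grad_f, x0, step=0.01, tol=1e-8, max_iter=10000):
    """Climb along the gradient until the update becomes negligible."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)                        # 1) direction of steepest ascent
        x_new = x + step * g                 # 2) small step in that direction
        if np.linalg.norm(x_new - x) < tol:  # 3) stop: solution no longer changes
            return x_new
        x = x_new
    return x

# Example: maximize f(x) = -(x1^2 + x2^2); the maximum is at the origin.
f = lambda x: -np.sum(x ** 2)
grad_f = lambda x: -2 * x
print(gradient_ascent(f, grad_f, x0=[3.0, -4.0]))  # approaches [0, 0]
```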

A common problem with these algorithms is that they often find only a local maximum/minimum. You can find a great overview of gradient descent/ascent algorithms here: http://sebastianruder.com/optimizing-gradient-descent/

  • This does not explain what to consider as neighbors in multidimensional space. For example, should there be only 2n neighbors in n cardinal directions, or 2^n neighbors for a {-delta, +delta}^n hypercube as suggested in the question, or maybe 3^n - 1 neighbors for {-delta, 0, +delta}^n? – Gassa Jan 21 '17 at 23:37
  • When dealing with real numbers, we have an infinite number of neighbors to choose from. In the 1D case, given a delta, we have only two neighbors. In 2D, we already get a circle with an infinite number of neighbors on it. Of course, we can go in only four directions (2^2 = 4). Maybe discretizing the neighborhood (a grid in 2D, a voxel grid in 3D) could be a solution; see the sketch below for the cardinal-direction variant. – Eric Dörheit Jan 24 '17 at 17:06
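To make the neighbor schemes from these comments concrete, here is a minimal n-dimensional hill-climbing sketch in Python using the cheapest variant, the 2n cardinal neighbors (±delta along each axis); the objective `f` is a placeholder:

```python
import numpy as np

def hill_climb(f, x0, delta=0.1, max_iter=10000):
    """Greedy ascent over the 2n cardinal neighbors of the current point."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # 2n neighbors: +/- delta along each coordinate axis.
        neighbors = [x + d * e for e in np.eye(x.size) for d in (delta, -delta)]
        best = max(neighbors, key=f)
        if f(best) <= f(x):
            return x   # no neighbor improves: local optimum at this delta
        x = best
    return x

# Example: maximize f(x) = -(x1^2 + x2^2 + x3^2) in 3 dimensions.
f = lambda x: -np.sum(x ** 2)
print(hill_climb(f, x0=[1.3, -0.7, 2.1]))  # approaches [0, 0, 0]
```

Swapping the neighbor list for k random points on the delta-sphere (e.g. `v = np.random.randn(x.size); x + delta * v / np.linalg.norm(v)`) gives the circle/sphere variant from the answer above.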

One example of a multidimensional search algorithm which needs only O(n) neighbours instead of O(2^n) neighbours is the Torczon simplex method described in *Multidirectional search: A direct search algorithm for parallel machines* (1989). I chose this over the more widely known Nelder-Mead method because the Torczon simplex method has a convergence proof (convergence to a local optimum given some reasonable conditions).
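For illustration only, here is a rough Python sketch of the reflect/expand/contract scheme as I understand it from Torczon's paper (minimization form); consult the paper for the authoritative version and the convergence conditions:

```python
import numpy as np

def multidirectional_search(f, simplex, tol=1e-8, max_iter=1000):
    """Rough sketch of Torczon-style multidirectional search (minimization).

    `simplex` is an (n+1) x n array of vertices; each iteration moves the
    whole simplex relative to its best vertex.
    """
    s = np.asarray(simplex, dtype=float)
    for _ in range(max_iter):
        s = s[np.argsort([f(v) for v in s])]   # s[0] is the best vertex
        best, rest = s[0], s[1:]
        if np.max(np.linalg.norm(rest - best, axis=1)) < tol:
            return best
        reflected = 2 * best - rest            # reflect through the best vertex
        r_best = min(f(v) for v in reflected)
        if r_best < f(best):                   # reflection succeeded
            expanded = 3 * best - 2 * rest     # try doubling the step
            if min(f(v) for v in expanded) < r_best:
                s = np.vstack([best, expanded])
            else:
                s = np.vstack([best, reflected])
        else:                                  # reflection failed: shrink
            s = np.vstack([best, (best + rest) / 2])
    return s[0]

# Example: minimize f(x) = x1^2 + x2^2 from a small initial simplex.
f = lambda x: np.sum(x ** 2)
init = [[1.0, 1.0], [1.5, 1.0], [1.0, 1.5]]
print(multidirectional_search(f, init))  # approaches [0, 0]
```

Each iteration evaluates n reflected points plus n expanded or contracted ones, which is where the O(n) neighbour count comes from.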

mcdowella

If you are using IEEE-754 floating point numbers, then the obvious answer is something like (2^52*(log_2(delta)+1023))^(n-1)+1 neighbors, provided delta >= 2^(-1022) (more or less, depending on your search space...), as that is the only way you can be certain that no neighboring solution within a distance of delta is missed.

Even assuming you instead take a random fixed-size sample of all points within a distance of delta, let's say delta = .1, you would still have the problem that if the distance from the local optimum were .0001, the probability of finding an improvement in just one dimension would be less than .0001/.1/2 = 0.05%, so you would need to take more and more random samples as you get closer to the local optimum (whose value you don't know...).

Obviously hill climbing is not intended for continuous real-number spaces or theoretical graph spaces with infinite degree. You should instead use a global search algorithm.