
I am developing an algorithm based on the hill-climbing algorithm, but with a method to overcome the problem of getting stuck in locally optimal solutions. Unlike, say, simulated annealing, where randomness is introduced into the search location, I try to introduce randomness into my search space. A description of the algorithm follows. My question is: is there a name in the literature for this (type of) algorithm, and has it been researched before?

The description

Although I am working on a many-dimensional problem, the following is a 2D visual representation of the algorithm.

[Figure: the first three steps of my algorithm, landing in a local optimum]

First, as above, the hill-climbing algorithm is used (in this case to minimize a value). This of course has the problem of entering a local minimum. At this point, I change the evaluation of specific parameters, so that my search space changes shape. My hill-climbing algorithm then uses the scaled (orange + blue) values, while the actual value of a location is still determined by its original (blue) value. So, after scaling:

[Figure: the search space after scaling, with the orange layer added on top of the blue values]

In the end we reach a new local optimum of the scaled (blue + orange) landscape. Even though this is not the global optimum when looking at blue + orange, it is (in this example) the global optimum when looking at blue, which is the value we are trying to minimize.
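As I understand the description, a minimal sketch could look like the following. This is my own hypothetical reconstruction in Python on a 1-D toy landscape; the additive penalty array, the step size of 1.0, and all function names are assumptions, not details from the post. Greedy descent runs on the scaled (blue + orange) values, while the best solution found so far is always judged by the original (blue) values.

```python
import random

def hill_climb(values, penalty, start):
    """Greedy descent on values[i] + penalty[i]; returns the index where it gets stuck."""
    pos = start
    while True:
        neighbours = [p for p in (pos - 1, pos + 1) if 0 <= p < len(values)]
        best = min(neighbours, key=lambda p: values[p] + penalty[p])
        if values[best] + penalty[best] >= values[pos] + penalty[pos]:
            return pos  # local minimum of the scaled landscape
        pos = best

def search(values, rounds=25):
    penalty = [0.0] * len(values)           # the orange "scaling" layer
    pos = random.randrange(len(values))
    best_pos = pos
    for _ in range(rounds):
        pos = hill_climb(values, penalty, pos)
        if values[pos] < values[best_pos]:  # track the ORIGINAL (blue) value
            best_pos = pos
        penalty[pos] += 1.0                 # raise the scaled landscape under the trap
    return best_pos, values[best_pos]

if __name__ == "__main__":
    landscape = [5, 3, 4, 2, 6, 1, 7]       # toy 1-D landscape with several basins
    print(search(landscape))
```

The penalty is only added at the point where the climber got stuck, mirroring the post's goal of changing as little of the rest of the landscape as possible.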

Radical
    Suggestion: snowy hill climbing (or sledging) ;) – BeyelerStudios Feb 08 '17 at 11:23
  • Not 100% sure if you're joking or not (somewhere around 90% maybe), but just in case you're not: I'm not looking for name suggestions, but looking for if there's an established name for the method ;) – Radical Feb 08 '17 at 12:12
  • It was supposed to be a joke. I don't know any algorithms that randomly transform the data and then apply gradient descent; usually those algorithms generate randomness at every step (you can reach the same location multiple times, and you'll have randomly transformed the values, locally, multiple times too). I guess you're not really generating random transformations, so maybe the name would come from how you transform your data? – BeyelerStudios Feb 08 '17 at 14:53
  • You're right, it's not totally random. The local value is based on multiple components, and the transformation is based on virtually increasing one of these components in order to create the illustrated effect with as little effect on the other locations as possible. In essence I was looking for the 'official' name of the method to look into previous research that has been done into it, but from your reaction (and a couple more hours of Googling) I'm starting to conclude that this isn't (yet) a very established approach. – Radical Feb 08 '17 at 15:16
  • Incidentally, regarding "you can reach the same location multiple times and you'll have randomly transformed the values multiple times too": transforming as few values as possible is based on the idea that this enables me to actually keep the transformation intact, at least for a couple of iterations, in order to force my solution away from the local minimum it is stuck in. (I don't yet have any hard evidence that this works better than generating randomness at every step, but my experiments so far do seem to support it.) – Radical Feb 08 '17 at 15:19
  • There are always pros and cons: transforming only the local data means you'll do it at every step, but since your search usually follows a 1-D path in an n-D space, that's cheap. Transforming the whole data set can of course also be done by fixing the transform and applying it at every step, though you'd still be transforming at every step (a small sketch of both options follows this thread). – BeyelerStudios Feb 08 '17 at 15:56
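To make the trade-off in the last comment concrete, here is a small hypothetical sketch (again Python, with illustrative names of my own): a fixed transform mutates a stored penalty layer once and reads it back cheaply at every step, while a per-step transform leaves the data untouched and recomputes the bump around the last trap each time a point is evaluated.

```python
# Option 1: fix the transform once in a penalty array; every later
# evaluation is a cheap lookup, at the cost of keeping extra state.
def scaled_eval_fixed(values, penalty, pos):
    return values[pos] + penalty[pos]

# Option 2: leave the data untouched and re-apply the transform on the
# fly; here the transform is a bump of height `bump` within `width`
# steps of the last local minimum (`stuck_at`).
def scaled_eval_per_step(values, pos, stuck_at, width=1, bump=1.0):
    return values[pos] + (bump if abs(pos - stuck_at) <= width else 0.0)
```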

0 Answers