7

I'm trying to understand the difference between these two algorithms and how they differ in solving a problem. I have looked at the algorithms and their internals. It would be good to hear from others who are already experienced with them. Specifically, I would like to know how they would behave differently on the same problem.

Thank you.

PRCube

2 Answers

14

Difference

The main difference between the two is the direction in which they move to reach the local minima (or maxima).

  • In Hill Climbing we change only one element of the vector at a time: we perturb that element, recompute the function value, and keep the change if the value improves. We keep adjusting one element at a time until no such move improves the position. In 3D space, each move can be visualised as a step along one of the axial directions, i.e. along the x, y, or z axis.
  • In Gradient Descent we take steps in the direction of the negative gradient at the current point to reach the minimum (the positive gradient in the case of a maximum). For example, in 3D space the step direction need not be axial; see the sketch below.
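As a minimal sketch of that difference (the toy objective f(x, y) = (x - 3)^2 + (y + 1)^2 and the helper names below are illustrative, not taken from the answer), the hill-climbing routine tries fixed-size moves along one axis at a time and keeps the best improving one, while the gradient-descent routine steps along the negative gradient, which may point in any direction:

    import numpy as np

    # Toy objective to minimise: f(x, y) = (x - 3)^2 + (y + 1)^2, minimum at (3, -1).
    def f(v):
        return (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2

    def grad_f(v):
        return np.array([2.0 * (v[0] - 3.0), 2.0 * (v[1] + 1.0)])

    def hill_climb(v, step=0.1, iters=1000):
        """Change one coordinate at a time (axial moves only), keeping a move
        only if it lowers f; stop when no single-coordinate move helps."""
        v = v.copy()
        for _ in range(iters):
            best, best_val = None, f(v)
            for i in range(len(v)):          # try each axis...
                for delta in (step, -step):  # ...in both directions
                    cand = v.copy()
                    cand[i] += delta
                    if f(cand) < best_val:
                        best, best_val = cand, f(cand)
            if best is None:                 # no axial move improves f -> local minimum
                break
            v = best
        return v

    def gradient_descent(v, lr=0.1, iters=1000):
        """Step along the negative gradient; the step direction is generally
        not aligned with any coordinate axis."""
        v = v.copy()
        for _ in range(iters):
            v = v - lr * grad_f(v)
        return v

    start = np.array([0.0, 0.0])
    print("hill climbing:   ", hill_climb(start))        # approximately (3, -1)
    print("gradient descent:", gradient_descent(start))  # approximately (3, -1)

Started from (0, 0), both end up near the minimum at (3, -1), but hill climbing gets there purely through axis-aligned moves, whereas gradient descent moves diagonally from the very first step.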
radbrawler
  • If the function is continuous and differentiable, should we expect a different result? Or would hill climbing be better suited to a search space whose arguments are discrete? – Tarik Dec 09 '17 at 13:55
3

In addition to radbrawler's answer, the two are nonetheless similar in the greedy approach that both use to find local minima/maxima. I would consider Gradient descent to be the continuous version of the discrete Hill climbing technique.

Ahmad Hassanat
  • No, gradient ascent/descent cannot be considered a continuous analogue of discrete Hill climbing. An expectation-maximization algorithm, such as Baum-Welch for Hidden Markov Models, can. Hill climbing will converge to the nearest local maximum, whereas gradient ascent will not necessarily. – Dylan Solms Mar 10 '20 at 21:51