I have the following code, which uses gradient descent to find the global minimum of y = (x+5)^2:

cur_x = 3                  # the algorithm starts at x=3
rate = 0.01                # learning rate
precision = 0.000001       # this tells us when to stop the algorithm
previous_step_size = 1     # initial step size; ensures the loop runs at least once
max_iters = 10000          # maximum number of iterations
iters = 0                  # iteration counter
df = lambda x: 2*(x+5)     # gradient of our function

while previous_step_size > precision and iters < max_iters:
    prev_x = cur_x                                # store current x value in prev_x
    cur_x = cur_x - rate * df(prev_x)             # grad descent
    previous_step_size = abs(cur_x - prev_x)      # change in x
    iters = iters+1                               # iteration count
    print("Iteration",iters,"\nX value is",cur_x) # print iterations
    
print("The local minimum occurs at", cur_x)

The procedure is fairly simple, and it is among the most intuitive and concise approaches I'm aware of for this kind of problem.

I'd now like to apply this to solving a system of nonlinear equations. Namely, I want to use this to solve the Time Difference of Arrival problem in three dimensions. That is, given the coordinates of 4 observers (or, in general, n+1 observers for an n-dimensional solution), the propagation speed s of the signal, and the time of arrival at each observer, I want to reconstruct the source (that is, determine its coordinates [x, y, z]).

I've already accomplished this using approximation search (see this excellent post on the matter: ), and I'd now like to try gradient descent instead (really, just as an interesting exercise). I know that the problem in two dimensions can be described by the following nonlinear system:

\sqrt{(x-x_1)^2+(y-y_1)^2} + s(t_2-t_1) = \sqrt{(x-x_2)^2+(y-y_2)^2}

\sqrt{(x-x_2)^2+(y-y_2)^2} + s(t_3-t_2) = \sqrt{(x-x_3)^2+(y-y_3)^2}

\sqrt{(x-x_3)^2+(y-y_3)^2} + s(t_1-t_3) = \sqrt{(x-x_1)^2+(y-y_1)^2}

I know that it can be done; however, I cannot determine how.
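One common reformulation (a sketch of the general least-squares approach, with the residual notation r_k introduced purely for illustration) is to fold the equations into a single scalar cost:

F(x, y) = \sum_{k=1}^{3} r_k(x, y)^2

where r_k is the left-hand side of the k-th equation minus its right-hand side, e.g.

r_1(x, y) = \sqrt{(x-x_1)^2+(y-y_1)^2} + s(t_2-t_1) - \sqrt{(x-x_2)^2+(y-y_2)^2}

Any point with F = 0 satisfies all three equations simultaneously, and gradient descent applies to F exactly as in the one-dimensional code above, with the scalar derivative replaced by the gradient \nabla F.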

How might I go about applying this in three dimensions, or to a nonlinear system in general?
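Along those lines, here is a minimal sketch in three dimensions, assuming hypothetical observer positions, a unit signal speed, and arrival times synthesized from an assumed true source (none of these values come from the problem), and using a central-difference approximation of the gradient rather than an analytic one:

import math

# Hypothetical setup: 4 observers, unit signal speed, and arrival times
# synthesized from an assumed true source (illustrative values only).
observers = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0), (0.0, 0.0, 10.0)]
s = 1.0
true_source = (2.0, 3.0, 4.0)
times = [math.dist(true_source, p) / s for p in observers]

def cost(pos):
    # Sum of squared residuals of ||pos-p_i|| + s*(t_j - t_i) = ||pos-p_j||
    # over the cyclic pairs (1,2), (2,3), (3,4), (4,1).
    total = 0.0
    for i in range(len(observers)):
        j = (i + 1) % len(observers)
        d_i = math.dist(pos, observers[i])
        d_j = math.dist(pos, observers[j])
        r = d_i + s * (times[j] - times[i]) - d_j
        total += r * r
    return total

def grad(pos, h=1e-6):
    # Central-difference approximation of the gradient of the cost.
    g = []
    for k in range(3):
        up = list(pos); up[k] += h
        dn = list(pos); dn[k] -= h
        g.append((cost(up) - cost(dn)) / (2 * h))
    return g

cur = [1.0, 1.0, 1.0]      # initial guess for the source (arbitrary)
rate = 0.01                # learning rate
precision = 1e-9           # stop when the update becomes this small
previous_step_size = 1.0
max_iters = 100000
iters = 0

while previous_step_size > precision and iters < max_iters:
    g = grad(cur)
    nxt = [c - rate * gk for c, gk in zip(cur, g)]      # grad descent step
    previous_step_size = max(abs(a - b) for a, b in zip(nxt, cur))
    cur = nxt
    iters += 1

print("Estimated source:", cur, "after", iters, "iterations")

With exact arrival times the cost has a zero at the true source; with noisy times the same loop returns a least-squares estimate instead.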

  • I would suggest to use Pytorch to do the automatic differentiation of your cost function. Just implement your cost function in Pytorch and it will compute the gradients and update the parameters automatically (loss.backward() and optimizer.step() calls respectively). – Ismael EL ATIFI Nov 07 '20 at 01:03
  • Was this code not helpful? https://stackoverflow.com/a/63814195/2144669 – David Eisenstat Nov 07 '20 at 13:51
  • @DavidEisenstat Apologies, somehow I completely missed your answer! Yes, that is very helpful. Perhaps you'd like to post your answer here so I can give attribution. – 10GeV Nov 07 '20 at 16:35
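
For completeness, a minimal sketch of the PyTorch route suggested in the first comment, reusing the same hypothetical observers and synthetic times as above; here loss.backward() computes the gradient by automatic differentiation and optimizer.step() performs the descent update:

import torch

# Same illustrative setup as the sketch above (assumed values, not from the question).
observers = torch.tensor([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                          [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
s = 1.0
true_source = torch.tensor([2.0, 3.0, 4.0])
times = torch.norm(observers - true_source, dim=1) / s

pos = torch.ones(3, requires_grad=True)      # initial guess for the source
optimizer = torch.optim.SGD([pos], lr=0.01)

for step in range(20000):
    optimizer.zero_grad()
    d = torch.norm(observers - pos, dim=1)               # distance to each observer
    r = d + s * (times.roll(-1) - times) - d.roll(-1)    # residual of each cyclic pair
    loss = (r ** 2).sum()                                # least-squares cost
    loss.backward()                                      # autodiff: gradient of loss w.r.t. pos
    optimizer.step()                                     # gradient descent update

print("Estimated source:", pos.detach())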
