I am experimenting with gradient descent and want to plot a contour of the gradient given independent variables x and y.
The optimization objective is to estimate a point given only a list of points and the distances to each of those points. I have a list of vectors of the form [(x_1, y_1, d_1), ..., (x_n, y_n, d_n)], where d_i is the measured distance from the point to be estimated to the point (x_i, y_i), and I have a function g(x, y) that returns the gradient at the point (x, y). (The function g(x, y) uses the training vectors to calculate the gradient.)
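For context, a minimal sketch of what such a gradient function could look like, assuming the objective is the sum of squared residuals between the measured distances and the distances from (x, y) to each training point (my actual g(x, y) may use a different loss, but it has this signature):

import numpy as np

# Hypothetical training data: [(x_i, y_i, d_i), ...]
vectors = [(0.0, 0.0, 5.0), (4.0, 0.0, 3.0), (0.0, 3.0, 4.0)]

def g(x, y):
    # Gradient of f(x, y) = sum_i (sqrt((x - x_i)^2 + (y - y_i)^2) - d_i)^2.
    # Returns a length-2 array (df/dx, df/dy). The loss above is an assumption.
    grad = np.zeros(2)
    for xi, yi, di in vectors:
        dist = np.hypot(x - xi, y - yi)
        if dist == 0:
            continue  # gradient is undefined exactly at a training point
        grad += 2.0 * (dist - di) / dist * np.array([x - xi, y - yi])
    return grad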
The gradient descent algorithm works fine and arrives at a close estimate of the actual point coordinates. Now I want to visualize the gradient as a contour map. I have the following for the x and y values:
xlist = np.linspace(min([v[0] for v in vectors])-1, max([v[0] for v in vectors])+1, 100)
ylist = np.linspace(min([v[1] for v in vectors])-1, max([v[1] for v in vectors])+1, 100)
X, Y = np.meshgrid(xlist, ylist)
But now I need a Z value that maps each pair of coordinates in the mesh grid to g(x, y), and it needs to have the correct shape for the matplotlib contour plot. The examples I have seen have not helped, because they all simply multiply the x and y arrays to generate the z values (which obviously will not work in this case), and none of the tips, tricks, and SO answers I have come across resolved this.

How do I use my custom function g(x, y) to create the 2D Z array necessary for constructing a valid contour plot?