You need to define "best" to have an answerable question.
What you probably want to do is define some sort of error function for how much being off from a given point matters, and then try to minimize the sum of the errors. The error function to use depends on what your actual problem is. For instance, perhaps you want to use (length(point - point_i) - distance_i)^2. That would be least squares. But perhaps you don't care so much about the absolute amount the distances are off, just the ratio between how far they are and how far you expected them to be. Then you might use (length(point - point_i)/distance_i - 1)^2. Or perhaps you get the points and distances from a bunch of sensors. In that case the error function should reflect how much uncertainty there is in each distance measurement, for example by weighting each term by the inverse of its variance.
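As a rough sketch in Python (the points, distances, and function names here are all made up for illustration), the two error functions above might look like this:

    import numpy as np

    # Hypothetical data: known points and the measured distance to each of them.
    points = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
    distances = np.array([2.5, 3.0, 2.0])

    def absolute_error(p):
        """Sum of squared absolute distance errors (least squares)."""
        residuals = np.linalg.norm(p - points, axis=1) - distances
        return np.sum(residuals ** 2)

    def relative_error(p):
        """Sum of squared relative errors: penalizes the ratio being off."""
        ratios = np.linalg.norm(p - points, axis=1) / distances - 1.0
        return np.sum(ratios ** 2)

Whichever one you pick, you end up with a single function of the candidate point that you then minimize.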
Once you have an appropriate error function chosen, you then need to optimize it. The simplest way is to compute the gradient of your error function and step downhill along it until you reach a minimum (gradient descent). If your error function is well behaved, this should work, though not particularly fast. If you're ambitious you can use the multivariate Newton-Raphson method instead. That makes more assumptions about your error function (you need second derivatives as well), and is more work to implement, but it converges much faster.
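Here is a minimal gradient-descent sketch for the least-squares error function above; the step size and iteration count are arbitrary choices, not tuned values:

    def gradient(p):
        """Analytic gradient of absolute_error at p."""
        diffs = p - points                     # vectors from each known point to p
        norms = np.linalg.norm(diffs, axis=1)  # current distances from p
        residuals = norms - distances
        # d/dp of (||p - p_i|| - d_i)^2  =  2 * residual_i * (p - p_i) / ||p - p_i||
        return np.sum((2.0 * residuals / norms)[:, None] * diffs, axis=0)

    def gradient_descent(p0, step=0.05, iters=500):
        p = np.asarray(p0, dtype=float)
        for _ in range(iters):
            p = p - step * gradient(p)
        return p

    best = gradient_descent([1.0, 1.0])

In practice, if you're in Python you could also just hand the error function to scipy.optimize.minimize, which picks step sizes for you and offers Newton-type methods when you can supply derivatives.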