The problem is that the computation scales with dimensionality. At around 4 dimensions you're usually better off with brute force than with a spatial data structure.
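For reference, brute force is just a linear scan. A minimal sketch in Python (the function name is my own, not from any library):

```python
import math

def farthest_point(points, query):
    """Linear scan: compare the query against every point. O(N) per
    query, and each distance computation grows linearly with d."""
    best, best_dist = None, -1.0
    for p in points:
        d = math.dist(query, p)  # works for any dimensionality
        if d > best_dist:
            best, best_dist = p, d
    return best
```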
If you know something about how the data is used, you can cut down on the work. Say you run this query a lot but the points rarely change. Then you can build the structure incrementally: each time you add a new point, check it against every existing point and cache each point's farthest neighbor from that brute-force pass. That gives you O(N) per insertion and O(N) per farthest query. But you need to do that insertion N times, which puts the total build cost at O(N^2).
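A rough sketch of that caching scheme, assuming points are tuples of coordinates (the `FarthestCache` class and its layout are my own invention, not a standard structure):

```python
import math

class FarthestCache:
    """On each insertion, compare the new point against every stored
    point and update both sides' cached farthest neighbor. Insertion is
    O(N); the cached answer for a stored point is then a dict read."""

    def __init__(self):
        self.points = []
        self.farthest = {}  # index -> (index of farthest point, distance)

    def insert(self, p):
        i = len(self.points)
        self.points.append(p)
        best_j, best_d = None, -1.0
        for j in range(i):  # compare against every existing point: O(N)
            d = math.dist(p, self.points[j])
            if d > self.farthest[j][1]:  # p may be j's new farthest point
                self.farthest[j] = (i, d)
            if d > best_d:
                best_j, best_d = j, d
        self.farthest[i] = (best_j, best_d)
```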
You could reduce this a bit if you also clustered the data. Define clusters of points during insertion, and you can make arguments like: your house is in New York, you've already measured the distance to a house in Australia, and no house in the Paris cluster can possibly be farther than that, so you can skip the entire Paris cluster. You can do that because the cluster gives you an upper bound on the distance to every point inside it. But that's not going to save you much, because in 4D things get really hard to optimize: you end up needing way more boxes to store the clusters, and most of the fun optimizations are pruning proofs of the form "we already exceed this distance, so all the remaining points can be ruled out." That's great in 2D, but those tricks become progressively messier with each new dimension.
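Here's a sketch of that pruning idea, assuming clusters arrive as `(lo, hi, points)` tuples with axis-aligned bounding boxes (how the clusters get built is left out, and the function names are mine):

```python
import math

def upper_bound_dist(query, lo, hi):
    """Farthest any point inside the axis-aligned box [lo, hi] can be
    from the query: per axis, take whichever box edge is farther away."""
    return math.sqrt(sum(max(abs(q - l), abs(q - h)) ** 2
                         for q, l, h in zip(query, lo, hi)))

def farthest_with_pruning(clusters, query):
    """Skip any cluster whose upper bound can't beat the best distance
    found so far. In high dimensions the boxes get loose, so fewer
    clusters actually get pruned."""
    best, best_d = None, -1.0
    # visit the most promising clusters first so pruning kicks in early
    ordered = sorted(clusters,
                     key=lambda c: -upper_bound_dist(query, c[0], c[1]))
    for lo, hi, pts in ordered:
        if upper_bound_dist(query, lo, hi) <= best_d:
            break  # no remaining cluster can contain a farther point
        for p in pts:
            d = math.dist(query, p)
            if d > best_d:
                best, best_d = p, d
    return best, best_d
```

The `break` is valid because the clusters are visited in descending order of their upper bounds: once one bound can't beat the current best, none of the rest can either. The catch, as above, is that in 4D the bounding boxes are loose enough that the bound rarely eliminates much.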