
I am looking for a performant algorithm to match a large number of people by location, gender and age according to this data structure:

  • Longitude (denotes the person's location)
  • Latitude (denotes the person's location)
  • Gender (denotes the person's gender)
  • Birthdate (denotes the person's birthdate)
  • LookingForGender (denotes the gender the person is looking for)
  • LookingForMinAge (denotes the minimum age the person is looking for)
  • LookingForMaxAge (denotes the maximum age the person is looking for)
  • LookingForRadius (denotes the maximum distance the person is looking for)
  • Processed (denotes which other people this person has already processed)

For any person P the algorithm should return candidates C for which applies:

  • Gender of C must equal P.LookingForGender
  • Gender of P must equal C.LookingForGender
  • Age of C (derived from Birthdate) must be between P.LookingForMinAge and P.LookingForMaxAge
  • Age of P (derived from Birthdate) must be between C.LookingForMinAge and C.LookingForMaxAge
  • Lat/Long distance between P and C must be less than or equal to P.LookingForRadius
  • Lat/Long distance between P and C must be less than or equal to C.LookingForRadius
  • Processed of P must not contain C

The algorithm should return the first 100 candidates C ordered by distance (Lat/Long). It should be optimized for both searches and updates, because people may change their location often.
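Whatever index ends up being used, it helps to pin the predicate down first. Here is a minimal sketch in Java, assuming hypothetical class and field names, a radius in kilometers, and the haversine formula for the great-circle distance:

```java
import java.time.LocalDate;
import java.time.Period;
import java.util.Set;

// Hypothetical record mirroring the data structure above; radius in km.
record Person(String id, double lat, double lon, String gender, LocalDate birthdate,
              String lookingForGender, int lookingForMinAge, int lookingForMaxAge,
              double lookingForRadiusKm, Set<String> processed) {

    int age() {
        return Period.between(birthdate, LocalDate.now()).getYears();
    }
}

class Matcher {
    // Great-circle distance in km via the haversine formula.
    static double distanceKm(Person a, Person b) {
        final double earthRadiusKm = 6371.0;
        double dLat = Math.toRadians(b.lat() - a.lat());
        double dLon = Math.toRadians(b.lon() - a.lon());
        double h = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(a.lat())) * Math.cos(Math.toRadians(b.lat()))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * earthRadiusKm * Math.asin(Math.sqrt(h));
    }

    // All seven criteria; note that every check is mutual except Processed.
    static boolean mutualMatch(Person p, Person c) {
        if (!c.gender().equals(p.lookingForGender())) return false;
        if (!p.gender().equals(c.lookingForGender())) return false;
        if (c.age() < p.lookingForMinAge() || c.age() > p.lookingForMaxAge()) return false;
        if (p.age() < c.lookingForMinAge() || p.age() > c.lookingForMaxAge()) return false;
        double d = distanceKm(p, c);
        if (d > p.lookingForRadiusKm() || d > c.lookingForRadiusKm()) return false;
        return !p.processed().contains(c.id());
    }
}
```

The index then only has to cheaply shrink the set of people this predicate is evaluated against; correctness stays in one place.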

My current thinking is that a k-d tree could be more suitable than locality-sensitive hashing for these needs and that I should go in this direction.

What would be your advice? What should I look for? What risks do you see?

Thanks!

Update:

  • Do I prefer to sacrifice space complexity for better time complexity? Yes, I prefer to sacrifice space complexity. However, I prefer an O(log n) solution that I actually understand and can maintain over an O(1) solution that I cannot grasp :)
  • Does the data fit into main memory? No, it does not. The data will be distributed across different nodes of a distributed document database (Azure Cosmos DB SQL API).
  • Do you want exact results, or are approximate results OK? Approximate results are OK; however, age/gender should be filtered exactly.
  • Added "Processed" to the algorithm, sorry for having missed that!
  • How often do people change their location? Users will change their location whenever they start the app and look for candidates. Daily active users will therefore change their location one or multiple times a day. A location change may, however, be minor, e.g. just a few kilometers. Out of 100 app downloads, 15 users will use the app once or more a month, and 3 users will use it once or more daily.
  • Would you prefer to sacrifice space complexity for better time complexity? Or the opposite? – smeso Jul 11 '18 at 10:05
  • Some more questions: Does the data fit into main memory? Do you want exact results or are approximate results (e.g., not the 100 closest candidates but 100 of the 150 closest candidates) ok? – SaiBot Jul 11 '18 at 10:22
  • @smeso I updated the question, thanks for asking! – driAn Jul 11 '18 at 10:53
  • @SaiBot: Thanks - I updated the question and added some more information. – driAn Jul 11 '18 at 10:54
  • 1
    Short answer:use a GIS-enabled database. – wildplasser Jul 11 '18 at 11:14
  • I don't believe LSH is suitable for your heterogeneous data dimensions, some of which are discrete. – ypnos Jul 11 '18 at 11:41
  • IMHO distributed databases and spatial indexes do not go well together (except if each node contains users of a region that may not be matched to users of other regions). If you are really sure that you need a distributed database, then what about a naive but parallelized solution (e.g., with Apache Spark)? – SaiBot Jul 11 '18 at 11:44
  • Note that using 8 bytes for each of the attributes and adding an ID, you will need 80 bytes per user. This allows for storing 100 million users in 8 GB of RAM. I don't think there are enough singles on earth to not fit on one machine ;-) – SaiBot Jul 11 '18 at 11:54
  • You said that you want to optimize for updates as well as for search because "people may change their location often." How do you define "often?" I'd be really surprised if each week you had 10% of your users change locations. You probably don't want to rebuild the entire structure if a user changes locations, but an O(n) update probably wouldn't hurt at all. – Jim Mischel Jul 11 '18 at 14:41
  • @SaiBot There are about 100 million unmarried adults in the U.S. The U.S. makes up about 4% of the world's population. So, rough estimate, there are between 2 and 2.5 billion unmarried adults in the world. I suppose, if you have a machine with 200 GB of RAM . . . (Sorry, I had to do the math.) – Jim Mischel Jul 11 '18 at 14:46
  • @SaiBot I'm interested in your comment about spatial indexes and distributed databases. I'm in the early stages of a project for which I was planning on using something like that. Do you have some references, or a writeup of your experiences? – Jim Mischel Jul 11 '18 at 14:47
  • Thanks Jim, I added more information about how often users change location. SaiBot, I think it will not fit into RAM if you consider the "Processed" attribute - unless you want to filter this as a post-step. I favour distributed systems over clusters, but nevertheless I will check out Apache Spark to see what it offers. – driAn Jul 11 '18 at 15:26
  • 1
    @Jim Mischel no unfortunately I have no writeup about it. Just in my experience: Research wise I have not come across a paper that has a convincing solution to the problem. The main issue is that spatial indexes are very efficient but not good parallelizable (just like binary search for example). Don't get me wrong, you can distribute spatial indexes across n machines and will be faster than naive approaches. But you cannot expect a factor n improvement. In most applications (as in OP) you have more than just a spatial search to do, which favors big data frameworks. – SaiBot Jul 13 '18 at 07:05
  • @JimMischel Thanks for the math ;) In the second sentence of my comment I was not referring to RAM but to a machine with secondary storage (i.e., SSDs, which can greatly benefit spatial indexes over HDDs). – SaiBot Jul 13 '18 at 07:12

1 Answer


Here is some info from Microsoft on how to use their spatial indexing ('spatial' is the keyword you want to search for).

The query you are looking for is a k-nearest-neighbor query (kNN search) with k = 100.
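To make that concrete, here is a rough sketch of how the exact filters (gender, age) plus one direction of the radius check could be pushed into a Cosmos DB query from the Java SDK. The account, container, and field names (including a denormalized age property) are assumptions, and since the SQL API cannot ORDER BY a computed ST_DISTANCE, the final distance sort, the candidate-side checks, and the Processed filter would run client-side:

```java
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosQueryRequestOptions;
import com.fasterxml.jackson.databind.JsonNode;

public class CandidateQuery {
    public static void main(String[] args) {
        CosmosClient client = new CosmosClientBuilder()
                .endpoint("https://<account>.documents.azure.com:443/")
                .key("<key>")
                .buildClient();
        // Database and container names are assumptions for this sketch.
        CosmosContainer users = client.getDatabase("dating").getContainer("users");

        // Assumes each document stores 'location' as a spatially indexed GeoJSON
        // Point and a denormalized 'age' field. ST_DISTANCE returns meters.
        // We overfetch (TOP 300), then sort by distance and apply the remaining
        // mutual checks (C's radius and age range, Processed) on the client,
        // trimming to the first 100.
        String query =
            "SELECT TOP 300 * FROM u "
          + "WHERE u.gender = 'female' AND u.lookingForGender = 'male' "
          + "AND u.age >= 30 AND u.age <= 40 "
          + "AND ST_DISTANCE(u.location, "
          + "{'type': 'Point', 'coordinates': [8.54, 47.37]}) <= 25000";

        users.queryItems(query, new CosmosQueryRequestOptions(), JsonNode.class)
             .forEach(doc -> System.out.println(doc.get("id")));
    }
}
```

Overfetching (TOP 300 instead of 100) is a crude but common way to compensate for the client-side filters; since approximate results are acceptable here, occasionally returning fewer than 100 candidates is tolerable.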

If you want to serialize the index yourself, have a look at R+ trees or R* trees; they are quite good for page-based serialization. There are lots of open-source examples of these trees. Here is my own implementation in Java; unfortunately, it does not support serialization.

About the other indexes:

  • I have no experience with LSH, so I can't say much about it. One thing I do know: since it is internally a HashMap, you need to take special care to make it scale to large amounts of data. This definitely increases complexity. Another problem: I'm not sure LSH is good for kNN search; you will have to look that up.
  • k-d trees are very simple and should do the job, but they are bad for serialization and can have a large memory overhead unless you implement a version that can hold more than one entry per node. k-d trees can also degenerate when updated often, so they may need rebalancing.
  • Otherwise I would suggest quadtrees, for example the qthypercube2. They are also quite simple, very fast in memory, and very well suited to frequent updates, especially if the entries move only a small distance (see the sketch below).
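To illustrate the update pattern, here is a minimal point-region quadtree sketch in plain Java. It uses planar coordinates (real lat/lon would need a projection, or the haversine check from the question) and my own hypothetical structure rather than the qthypercube2 API:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal point-region quadtree sketch. The root square must cover the whole
// coordinate domain; points outside it are silently dropped in this sketch.
class QuadTree {
    private static final int CAPACITY = 16;   // max points per leaf before splitting
    private final double cx, cy, half;        // this node's square: center and half-width
    private List<double[]> points = new ArrayList<>();  // leaf payload: {x, y, userId}
    private QuadTree[] kids;                  // null while this node is a leaf

    QuadTree(double cx, double cy, double half) {
        this.cx = cx; this.cy = cy; this.half = half;
    }

    private boolean contains(double[] p) {
        return Math.abs(p[0] - cx) <= half && Math.abs(p[1] - cy) <= half;
    }

    void insert(double[] p) {
        if (kids == null) {
            if (points.size() < CAPACITY) { points.add(p); return; }
            split();                          // leaf is full: push points down
        }
        for (QuadTree k : kids) if (k.contains(p)) { k.insert(p); return; }
    }

    // Removal by identity; a location change is remove + mutate + insert.
    boolean remove(double[] p) {
        if (kids == null) return points.removeIf(q -> q == p);
        for (QuadTree k : kids) if (k.contains(p)) return k.remove(p);
        return false;
    }

    private void split() {
        double h = half / 2;
        kids = new QuadTree[] {
            new QuadTree(cx - h, cy - h, h), new QuadTree(cx + h, cy - h, h),
            new QuadTree(cx - h, cy + h, h), new QuadTree(cx + h, cy + h, h)
        };
        for (double[] q : points) insert(q);  // re-distribute into the children
        points = null;                        // interior nodes hold no points here
    }

    // Collect all points within radius r of (x, y); the caller then sorts by
    // distance, applies the exact filters, and keeps the first 100.
    void queryRadius(double x, double y, double r, List<double[]> out) {
        // Prune: skip nodes whose square cannot intersect the query circle.
        if (Math.abs(x - cx) > half + r || Math.abs(y - cy) > half + r) return;
        if (kids == null) {
            for (double[] q : points)
                if ((q[0] - x) * (q[0] - x) + (q[1] - y) * (q[1] - y) <= r * r) out.add(q);
        } else {
            for (QuadTree k : kids) k.queryRadius(x, y, r, out);
        }
    }
}
```

A location update is then `tree.remove(p)`, mutate `p[0]`/`p[1]`, `tree.insert(p)`; there is no rebalancing step, which is what makes quadtrees attractive for the frequent small moves described in the question.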