I am developing an online service. It's nothing like Tinder, but understanding how their matching algorithm scales would help me design mine.
My assumption is that each of their users has a `skipped` set of users that have already been seen. I don't know exactly what they use (it may not be Redis), but there is a page that roughly describes a Redis implementation (not essential, just informative).
I'm assuming a candidate is selected at random from `users` on the server (let's ignore age, gender, etc. and just take people from your location), and is then checked against your `skipped` set.
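Here is roughly the loop I'm imagining, as a sketch. The names (`users`, `skipped`, `pick_candidate`) and the in-memory sets are my own assumptions standing in for whatever store (e.g. Redis sets) the real service uses:

```python
import random

def pick_candidate(users, skipped, max_tries=100):
    """Rejection-sampling sketch: draw random users until one is unseen.

    `users` and `skipped` are plain Python sets here; in practice they
    would live in a database or a store like Redis.
    """
    pool = list(users)
    for _ in range(max_tries):
        candidate = random.choice(pool)
        if candidate not in skipped:
            return candidate
    # Give up after too many tries: nearly everyone has been skipped.
    return None

users = {f"user{i}" for i in range(10)}
skipped = {f"user{i}" for i in range(9)}  # all but one already seen
print(pick_candidate(users, skipped))
```

The `max_tries` cap is just my guess at how you'd avoid an infinite loop; the core question below is about what happens before that cap is ever a good answer.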
Now, what happens if someone manually skips, or runs a bot that skips, every single user? Doesn't this process start to lag as the size of `skipped` approaches the size of `users`? Eventually almost every randomly chosen user will already be in `skipped`, and the server ends up in a loop: pick a random user, discover they've been seen, try again.
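To make the concern concrete: if picks are uniform with replacement, the number of draws until an unseen user appears is geometric, so the expected cost is `N / (N - S)` for `N` total users and `S` skipped. This is my own back-of-the-envelope model, not anything Tinder has published:

```python
def expected_tries(total_users, skipped_count):
    """Mean random draws (with replacement) until an unseen user
    appears, assuming uniform picks: a geometric distribution with
    success probability (total_users - skipped_count) / total_users."""
    unseen = total_users - skipped_count
    if unseen <= 0:
        raise ValueError("everyone has been skipped")
    return total_users / unseen

n = 1_000_000
for skipped_count in (500_000, 900_000, 990_000, 999_000):
    print(skipped_count, expected_tries(n, skipped_count))
    # prints 2.0, 10.0, 100.0, 1000.0 respectively
```

So the loop stays cheap until the skipped set covers most of the pool, and then the cost blows up sharply, which is exactly the bot scenario above.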
How do they keep this process fast? There must be more to it than a cap on how many `skipped` people you can have, because you might skip a lot of people who never come back online, leaving your `skipped` set larger than the current pool of people in `users`.