20

To get a single random record from the db, I'm currently doing:

User.all.sample

But when there are 100000+ users, it takes a few seconds to load them all, just to select one.

What's the simplest way to load a single random user from the db?

Mirror318

8 Answers

32

You can try the following database-independent query:

User.find(User.pluck(:id).sample)
[DEBUG]  (36.5ms)  SELECT `users`.`id` FROM `users`
[DEBUG] User Load (0.5ms)  SELECT  `users`.* FROM `users` WHERE `users`.`id` = 58229 LIMIT 1

This one fires two queries, but it is still efficient: it took only 37ms to fetch a single random user record,

whereas the following query takes around 624.7ms:

User.order("RAND()").first
[DEBUG] User Load (624.7ms)  SELECT  `users`.* FROM `users`  ORDER BY RAND() LIMIT 1

I checked this against 105,510 user records.
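The heavy lifting in the faster approach happens in plain Ruby once the ids are plucked. A minimal sketch (the ids below are purely illustrative):

```ruby
# `User.pluck(:id)` yields a plain Ruby array; `Array#sample` then picks
# from it uniformly at random, entirely in memory.
ids  = [12, 58, 229, 3071, 58229]  # stand-in for User.pluck(:id)
one  = ids.sample                  # one random id, as in User.find(ids.sample)
four = ids.sample(4)               # four distinct random ids
```

Passing a count to `sample`, as in the comments below, returns that many distinct ids without repeats.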

Ganesh
  • 2
    `User.find(User.pluck(:id).sample)` is great, not DB-type dependent. If anyone needs more than one record, you can add an argument to sample, like `User.find(User.pluck(:id).sample(4))` – Mirror318 Mar 28 '18 at 05:49
  • 2
    I don't recommend any of this. The first solution will consume more and more memory as the user count grows. The second solution can be really slow and also consume significant resources. There are many alternatives depending on what kind of random you **really** need. Actually, I recommend avoiding randomization as much as possible. – Alex Tatarnikov Mar 28 '18 at 09:26
  • 1
    What are the other efficient alternatives? Please post them as an answer so others will also come to know the solution. – Ganesh Mar 28 '18 at 09:35
  • 2
    @AlexTatarnikov I need a random record from the db, no special kind of random, just a random record. If you have a more efficient solution please share – Mirror318 Mar 28 '18 at 20:51
  • 2
    @GaneshNavale You will load an array with all ids into memory. Imagine you have hundreds of thousands of ids. – Alex Tatarnikov Mar 28 '18 at 20:55
  • @Mirror318 This is one of them https://stackoverflow.com/a/5297523/3702804 – Alex Tatarnikov Mar 28 '18 at 20:58
  • @AlexTatarnikov Offset is mentioned in another comment, you should post it as an answer here – Mirror318 Mar 28 '18 at 23:25
  • The offset mentioned in another comment will be less efficient, taking approx **565 ms** – Ganesh Mar 29 '18 at 05:52
  • `User.offset(rand(User.count)).limit(1).take` How does this work for you guys? For me, it took less than 10ms whereas others were taking around 80ms. – arunt May 23 '19 at 09:07
  • Check the first solution on a table containing 25M records (well, I did; it does not work). The second one works, but it takes some time. Fully agree with @AlexTatarnikov – Nick Roz Jan 26 '21 at 15:46
14

Using PostgreSQL or SQLite, with RANDOM():

User.order("RANDOM()").take

Presumably the same would work for MySQL with RAND():

User.order("RAND()").take
fongfan999
8

After a lot of trial and error, I've found this solution to be helpful and error-free.

Model.find(Model.ids.sample)

`Model.ids` returns an array of all ids in the table. We then call `sample` on that array, which returns a random item from the list.

Sebastián Palma
Ahmed Khattab
5

You can find the maximum user id in the table and find a user given a random id limited to this maximum. Example:

max_id = User.order(id: :desc).limit(1).pluck(:id).first
user = User.find_by('id > ?', rand(max_id))

These two queries are extremely fast because you are using the index for the primary key (id).
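A plain-Ruby sketch of why this stays correct even when rows have been deleted (the ids below are illustrative): `rand(max_id)` picks a number strictly below the maximum, and the `id > ?` lookup returns the first surviving record above it.

```ruby
# Simulated table where rows 3, 4, 6, 7 and 8 were deleted.
ids    = [1, 2, 5, 9, 10]
max_id = ids.max
target = rand(max_id)                  # 0..9, mirrors rand(max_id) above
picked = ids.find { |id| id > target } # mirrors find_by('id > ?', target)
# `picked` is never nil: target < max_id, so at worst the max id matches.
```

One caveat: records that sit directly after a gap are picked more often, so the distribution is only uniform when ids are dense.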

3

For Rails 6:

You can pass a count to `sample` to specify how many records you want:

User.all.sample(1)

The above query returns a single random user record.

Darshan Thakor
  • 1
    This code will make this request to database: `SELECT "users".* FROM "users"` - fetching all users into memory. – Pavel Chuchuva Nov 29 '22 at 23:21
  • This example has nothing to do with Rails 6, and the question explains the problem with this and is looking for a more efficient alternative. – Archonic Apr 17 '23 at 23:05
0

TLDR: Scroll to the bottom of the answer - strategy #5 is what I use today.


I've actually implemented this in an app where I need to randomly pull a record out of a growing list of currently 1_000_000 records (for business reasons).

  1. Use DB RANDOM for PostgreSQL/SQLite or RAND for MySQL/MariaDB

    • Pro: Most robust
    • Con: Slow. As the table grows, takes longer and longer (~260ms on my table)
        User.order('RANDOM()').limit(1)
      
  2. Pluck/Select IDs and then use Ruby Array#sample to get a random id. Then look up that record in DB

    • Pro: faster (~50ms on my table)
    • Con: As your table grows, this will start getting slower, and you may simply run out of resources depending on the size of the machine (e.g. if you're running on a small VM)
        User.find(User.pluck(:id).sample)
      
  3. Get the min & max value of IDs in the DB, then `rand(min..max)` and do a `find_by` (in case you hit a deleted record) for that record:

    • Pro
      • Extremely fast, if no deleted data (~17ms on my sample data)
      • If the table grows, it doesn't affect the speed of the lookup
    • Con
      • Requires Numeric IDs, can't use GUID or Varchar IDs
      • Needs sequential IDs without missing IDs
      • Can be brittle if any records were deleted and ID's are missing.
        • Error handling & additional lookups can mitigate the problem, but as number of missing IDs becomes larger, this will slow as well
    User.find_by(id: rand(1..User.count))
    
  4. There is another strategy that I haven't seen other answers mention - Random Offset. Just make sure it's in the range of your existing records.

  • Pro: no error handling needed, doesn't depend on the type of ID
  • Con: Not as fast - about 120ms on avg on my sample data, but depending on where the offset is in the table the actual queries can be anywhere between 20ms and 600ms. That's just how offset works.
    User.offset(rand(User.count)).limit(1)
    

Ultimately, I chose the 3rd strategy: despite the cons, it is still faster on average for my use-case. Deletes from that table are also rare, and we use soft deletes, which I can use in my error handling. However, since I came up with strategy #4, I've been using that sometimes too. Works well.

Tips:

  • If the actual current count is not critical: you can preload `User.count` and memoize it as `User.total_count` (for example), or keep it in an app-load config that gets loaded on app restart, and either invalidate this cache or restart your app on a regular basis. This removes the extra 15-20ms needed to get `User.count` and speeds up option 3 to under 3ms, and option 4 to about 100ms

    def self.total_count
      @total_count ||= count
    end
    

    This is another reason I used strategy #3. Even hitting "missing" records, and having to handle the error and do additional lookups until I find a "working" random record, is very cheap. In the time one lookup takes with the other strategies, I can do 5-10 lookups with this approach and still be ahead of the curve on speed.

  • Error handling can be done 2 ways:

    • Do lookups until you find one that exists
    • Do 3 (5, 10) lookups, and then select a random one from the result set. Still super fast (especially if you memoize `User.count` as `User.total_count`, or just do `User.count` before executing the statement below). In Rails, `find_by` returns a result or nil.
    • Depending on your data, `User.maximum(:id)` and `User.count` may be the same or close enough not to matter, but they can also be far apart. So be careful: do `User.maximum(:id)` or `User.last.id` instead of `User.count` if you have a lot of gaps in your data. In my case there are no gaps, so the result is the same.
    max = User.maximum(:id)
    ids = 5.times.map { rand(1..max) }
    User.where(id: ids).sample
    

Error handling should be chosen, depending on how many "missing" records you have in a sequence. If you have a lot, then strategy #3 may not be a good choice.
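The first error-handling variant ("look up until you find one") can be sketched in plain Ruby. Here `existing_ids`, `max_id` and `random_existing_id` are hypothetical stand-ins for the table and the repeated `find_by(id: ...)` calls; the bound on attempts keeps the worst case cheap:

```ruby
# Retry random ids until one exists; cheap when gaps are rare.
def random_existing_id(existing_ids, max_id, attempts: 10)
  attempts.times do
    candidate = rand(1..max_id)
    return candidate if existing_ids.include?(candidate) # stand-in for find_by
  end
  nil # caller can fall back to another strategy
end
```

With a dense table this almost always returns on the first attempt; with a very sparse one it returns nil and you fall back to another strategy.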

EDIT - new strategy #5:

Adding a 5th strategy that I came up with after finishing this post. I think this is the fastest, while being the most robust:

  1. Get the min & max value of IDs in the DB, then `rand(min..max)` and do a `where` lookup:

    • Pro
      • Extremely fast - I get 1-6ms results
      • If the table grows, it doesn't affect the speed of the lookup (actually faster than offset)
      • If records get deleted, it will just find the first match after the deleted one.
      • Can be modified into a scope, and used with other scopes - i.e User.published_today.random_record (Just keep in mind that it's essentially reimplementing an offset in a way, which may limit your data set for a lookup)
    • Con
      • Not as clear as using just offset, but offset gets slower the bigger your data is.
    # `random_id` Can be memoized / cached, but beware of scoping before caching!!!
    random_id = rand(User.minimum(:id)..User.maximum(:id))
    User.where("id >= ?", random_id).limit(1)
    
konung
0

Try this:

User.offset(rand(User.count)).limit(1).take

Enjoy it :)

msaioz
-1

You can get a sample in Rails console using Model.all.sample.attribute.

E.g.:

Contact.all.sample.name
=> "Bob Mcmillan"
Raphael Onofre