
I am trying to find a way to get a random selection from a large dataset.

We expect the set to grow to ~500K records, so it is important to find a way that keeps performing well while the set grows.

I tried the technique from http://forums.mysql.com/read.php?24,163940,262235#msg-262235, but it's not exactly random and it doesn't play well with a LIMIT clause: you don't always get the number of records that you want.

So I thought: since the PK is auto_increment, I could just generate a list of random IDs and use an IN clause to select the rows I want. The problem with that approach is that sometimes I need a random set of records having a specific status, a status that is found in at most 5% of the total set. To make that work I would first have to find out which IDs have that specific status, so that's not going to work either.
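
Roughly what I had in mind, as a sketch (table and column names are just placeholders, and the random IDs would be generated in the application):

SELECT * FROM items WHERE id IN (123, 4567, 89012, 234567, 345678);

-- and this is where it breaks down: with a status filter most of the random IDs miss,
-- because only ~5% of the rows carry that status
SELECT * FROM items WHERE id IN (123, 4567, 89012, 234567, 345678) AND status = 'X';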

I am using MySQL 5.1.46 with the MyISAM storage engine.
It might be important to know that the query selecting the random rows will be run very often, and the table it selects from is appended to frequently.

Any help would be greatly appreciated!

Dennis Haarbrink

2 Answers


You could solve this with some denormalization:

  • Build a secondary table that contains the same pkeys and statuses as your data table
  • Add and populate a status group column which will be a kind of sub-pkey that you auto-number yourself (a 1-based auto-increment relative to a single status):
Pkey    Status    StatusPkey
1       A         1
2       A         2
3       B         1
4       B         2
5       C         1
...     C         ...
n       C         m (where m = # of C statuses)

When you don't need to filter, you can generate random numbers against the pkey as you mentioned above. When you do need to filter, generate them against the StatusPkeys of the particular status you're interested in.
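
A rough sketch of the lookup, assuming the secondary table is called status_index(pkey, status, status_pkey) and you want one random row with status 'X' (these names are mine, not from the question):

SELECT FLOOR(1 + RAND() * MAX(status_pkey)) INTO @pick
FROM status_index
WHERE status = 'X';

SELECT d.*
FROM status_index si
JOIN data_table d ON d.id = si.pkey
WHERE si.status = 'X'
  AND si.status_pkey = @pick;

For a set of n rows, generate n distinct numbers in that range in the application and use them in an IN clause against status_pkey instead.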

There are several ways to build this table. You could have a procedure that you run on an interval or you could do it live. The latter would be a performance hit though, since calculating the StatusPkey could get expensive.
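
One way to rebuild it in a single pass on MySQL 5.1 (no window functions) is the user-variable trick below. Table and column names are assumptions, and MySQL does not formally guarantee the evaluation order of user variables within a SELECT, so treat it as a sketch rather than production code:

TRUNCATE TABLE status_index;

INSERT INTO status_index (pkey, status, status_pkey)
SELECT pkey, status, status_pkey
FROM (
    SELECT d.id AS pkey,
           d.status AS status,
           @n := IF(d.status = @prev, @n + 1, 1) AS status_pkey,  -- restart numbering per status
           @prev := d.status AS prev_status
    FROM data_table d,
         (SELECT @n := 0, @prev := NULL) vars
    ORDER BY d.status, d.id
) numbered;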

Paul Sasik
  • This approach also came to mind, but I thought of it as too expensive. I didn't realize then that it would occupy only a maximum of 5% of the total (I thought it would be greater). So I think this is the approach I will take. I will create a background process for this though; I don't want that performance hit in my frontend. – Dennis Haarbrink Aug 24 '10 at 17:29
  • How about populating (or recreating) the table on an interval? Once a day or once an hour? Or does it need to be near-live? – Paul Sasik Aug 24 '10 at 18:04
  • That's what I intend to do; I'm going to install a crontab for this purpose. I think I can keep the update frequency fairly high (say, every five minutes) given that I can register when a status changes. So all I have to do is check the 'dirty' flag and repopulate the table! (I might even be able to pull off doing just updates if I keep track of the exact status updates). – Dennis Haarbrink Aug 24 '10 at 18:39

You can do this efficiently, but you have to do it in two queries.

First get a random offset scaled by the number of rows that match your 5% conditions:

SELECT FLOOR(RAND() * (SELECT COUNT(*) FROM MyTable WHERE ...conditions...))

This returns an integer between 0 and the row count minus one. Next, use that integer as the offset in a LIMIT clause:

SELECT * FROM MyTable WHERE ...conditions... LIMIT 1 OFFSET ?

Not every problem must be solved in a single SQL query.
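
From the MySQL client, the two steps might look like this (MyTable and the status condition are placeholders; MySQL won't accept a user variable directly in OFFSET, hence the prepared statement):

SELECT CAST(FLOOR(RAND() * COUNT(*)) AS UNSIGNED) INTO @row_offset
FROM MyTable WHERE status = 'X';

SET @sql = CONCAT('SELECT * FROM MyTable WHERE status = ''X'' LIMIT 1 OFFSET ', @row_offset);
PREPARE pick_row FROM @sql;
EXECUTE pick_row;
DEALLOCATE PREPARE pick_row;

From application code you would simply run the first query, read the number, and plug it into the second query's OFFSET.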

Bill Karwin
  • So what you're saying is to execute the query twice. Then skip past potentially several hundred thousand rows? I wouldn't call it efficient necessarily... The better way (using your two-query approach) would be adding a `WHERE id >= ? LIMIT 1` to the second query. That way it doesn't need to load the first n rows to find the row you want (it can short-circuit that step and apply significant optimizations)... – ircmaxell Aug 24 '10 at 16:59
  • The problem with this approach is that you can only retrieve one record at a time. When you increase your `limit` it's not a random set anymore; only the offset is random. – Dennis Haarbrink Aug 24 '10 at 17:00
  • @ircmaxell: The offset is not an id value. Using `WHERE id >= ?` is non-random: rows that occur after gaps are picked with greater frequency. – Bill Karwin Aug 24 '10 at 18:11
  • @Dennis Haarbrink: Right, this technique is useful for picking one random row at a time. – Bill Karwin Aug 24 '10 at 18:13
  • And what to do if we want to pick several rows? – Oleg Mar 17 '11 at 06:33
  • @user366534: Pick several, one at a time, and repeat if you get a duplicate. Sorry that's kind of a cop-out answer, but no other answer makes sense in the relational model. – Bill Karwin Mar 17 '11 at 09:19