
How do I take an efficient simple random sample in SQL? The database in question is running MySQL; my table is at least 200,000 rows, and I want a simple random sample of about 10,000.

The "obvious" answer is to:

SELECT * FROM table ORDER BY RAND() LIMIT 10000

For large tables, that's too slow: it calls RAND() for every row (which already puts it at O(n)), and sorts them, making it O(n lg n) at best. Is there a way to do this faster than O(n)?

Note: As Andrew Mao points out in the comments, if you're using this approach on SQL Server, you should use the T-SQL function NEWID(), because RAND() may return the same value for all rows.

EDIT: 5 YEARS LATER

I ran into this problem again with a bigger table, and ended up using a version of @ignorant's solution, with two tweaks:

  • Sample the rows down to 2-5x my desired sample size, which is cheap to ORDER BY RAND()
  • Save the result of RAND() to an indexed column on every insert/update; a sketch of that schema change follows this list. (If your data set isn't very update-heavy, you may need to find another way to keep this column fresh.)
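
A rough sketch of that second tweak (the column name frozen_rand matches the query below; my_table stands in for the real table), as a one-time schema change in MySQL:

-- Hypothetical setup: add, backfill, and index the frozen_rand column
ALTER TABLE my_table ADD COLUMN frozen_rand DOUBLE;
UPDATE my_table SET frozen_rand = RAND();  -- one-time backfill of existing rows
CREATE INDEX idx_frozen_rand ON my_table (frozen_rand);

After that, the application (or a trigger) sets frozen_rand = RAND() on each insert/update so the column stays fresh.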

To take a 1000-item sample of a table, I count the rows and sample the result down to, on average, 10,000 rows with the frozen_rand column:

SELECT COUNT(*) FROM table; -- Use this to determine rand_low and rand_high

  SELECT *
    FROM table
   WHERE frozen_rand BETWEEN %(rand_low)s AND %(rand_high)s
ORDER BY RAND() LIMIT 1000

(My actual implementation involves more work to make sure I don't undersample, and to manually wrap rand_high around, but the basic idea is "randomly cut your N down to a few thousand.")

While this makes some sacrifices, it allows me to sample the database down using an index scan, until it's small enough to ORDER BY RAND() again.

baxx
ojrac
  • 4
    That doesn't even work in SQL server because `RAND()` returns the same value every subsequent call. – Andrew Mao Sep 20 '12 at 16:43
  • 1
    Good point -- I'll add a note that SQL Server users should use ORDER BY NEWID() instead. – ojrac Sep 20 '12 at 19:14
  • It still is terribly inefficient because it has to sort all the data. A random sampling technique for some percentage is better, but even after reading a bunch of posts on here, I haven't found an acceptable solution that is sufficiently random. – Andrew Mao Sep 20 '12 at 21:11
  • If you read the question, I am asking specifically because ORDER BY RAND() is O(n lg n). – ojrac Sep 27 '12 at 02:25
  • muposat's answer below is great if you're not too obsessed with the statistical randomness of RAND(). – Josh Greifer Nov 18 '14 at 10:14

12 Answers

84

I think the fastest solution is

select * from table where rand() <= .3

Here is why I think this should do the job.

  • It will create a random number for each row. The number is between 0 and 1
  • It keeps that row if the number generated is between 0 and .3 (30%).

This assumes that rand() is generating numbers in a uniform distribution. It is the quickest way to do this.

I saw that someone had recommended this solution and they got shot down without proof, so here is what I would say to that:

  • This is O(n), but no sorting is required, so it is faster than the O(n lg n) approach
  • MySQL is very capable of generating random numbers for each row. Try this:

    select rand() from INFORMATION_SCHEMA.TABLES limit 10;

Since the database in question is MySQL, this is the right solution.
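
As a sketch of how this combines with the exact sample-size requirement discussed in the comments below (my_table is a placeholder; the 0.1 threshold assumes ~200,000 rows and roughly a 2x oversample of the 10,000 desired):

-- Keep each row with ~10% probability, then sort and trim only the ~20,000 survivors
SELECT *
  FROM my_table
 WHERE RAND() <= 0.1
ORDER BY RAND()
 LIMIT 10000;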

ignorant
  • 1
    First, you have the problem that this doesn't really answer the question, since it gets a semi-random number of results returned, close to a desired number but not necessarily exactly that number, instead of a precise desired number of results. – user12861 Feb 07 '13 at 15:37
  • 1
    Next, as to efficiency, yours is O(n), where n is the number of rows in the table. That's not nearly as good as O(m log m), where m is the number of results you want, and m << n. You could still be right that it would be faster in practice, because as you say generating rand()s and comparing them to a constant COULD be very fast. You'd have to test it to find out. With smaller tables you may win. With huge tables and a much smaller number of desired results I doubt it. – user12861 Feb 07 '13 at 15:40
  • 2
    While @user12861 is right about this not getting the exact right number, it's a good way to cut the data set down to the right rough size. – ojrac Feb 08 '13 at 19:08
  • 1
    How does the database service the following query - `SELECT * FROM table ORDER BY RAND() LIMIT 10000 ` ? It has to first create a random number for each row (same as the solution I described), then order it.. sorts are expensive! This is why this solution WILL be slower than the one I described, as no sorts are required. You can add a limit to the solution I described and it will not give you more than that number of rows. As someone correctly pointed out, it won't give you EXACT sample size, but with random samples, EXACT is most often not a strict requirement. – ignorant Apr 03 '13 at 21:28
  • Is there a way to specify minimum number of rows? – CMCDragonkai Mar 15 '14 at 23:18
  • The problem with randomness is that it is a probability. So if you wanted 30% rows of a 100k table, you could specify .3 as the random threshold and then limit 30k, and that will usually work. However you could end up with 25k rows, or 40k rows in different runs as it is a random distribution. You could increase the likelihood of getting exactly 30k rows by specifying .4 as the random threshold and limit 30k, but at the end you can only increase the likelihood, not absolute numbers. The higher you ask for, the more likely you are to get your minimum set of rows, but its not exactly right. – ignorant Mar 17 '14 at 17:29
  • This assumes that `rand()` is generating numbers with a *uniform*, not normal, distribution. – augurar Nov 20 '14 at 23:37
  • Thanks for pointing that out @augurar. I have updated the answer. MYSQL is not truly uniform but "close", see [this](http://stackoverflow.com/questions/20565732/distribution-of-rand-in-mysql) – ignorant Nov 24 '14 at 16:11
  • It's not random. It will artificially favour rows earlier in the table as long as you specify a constant that will give you the required number of rows – symcbean May 09 '16 at 20:43
  • That's not correct.. if you sample every 5th row out of 100 rows, you will end up with 20 rows from different time scales.. will they be the same 20 rows each time? depends on the database, no guarantees on row order exist in principle... anyway, if you notice, there is no LIMIT in the answer. – ignorant May 09 '16 at 22:42
28

There's a very interesting discussion of this type of issue here: http://www.titov.net/2005/09/21/do-not-use-order-by-rand-or-how-to-get-random-rows-from-table/

I think with absolutely no assumptions about the table, your O(n lg n) solution is the best. Though actually, with a good optimizer or a slightly different technique, the query you list may be a bit better: O(m*n), where m is the number of random rows desired, as it wouldn't necessarily have to sort the whole large array; it could just search for the smallest m times. But for the sort of numbers you posted, m is bigger than lg n anyway.

Three assumptions we might try out:

  1. there is a unique, indexed, primary key in the table

  2. the number of random rows you want to select (m) is much smaller than the number of rows in the table (n)

  3. the unique primary key is an integer that ranges from 1 to n with no gaps

With only assumptions 1 and 2 I think this can be done in O(n), though you'll need to write a whole index to the table to match assumption 3, so it's not necessarily a fast O(n). If we can ADDITIONALLY assume something else nice about the table, we can do the task in O(m log m). Assumption 3 would be an easy nice additional property to work with. With a nice random number generator that guaranteed no duplicates when generating m numbers in a row, an O(m) solution would be possible.

Given the three assumptions, the basic idea is to generate m unique random numbers between 1 and n, and then select the rows with those keys from the table. I don't have MySQL or anything in front of me right now, so this is slightly pseudocode, but it would look something like:


create table RandomKeys (RandomKey int)
create table RandomKeysAttempt (RandomKey int)

-- generate m random keys between 1 and n
for i = 1 to m
  insert RandomKeysAttempt select rand()*n + 1

-- eliminate duplicates
insert RandomKeys select distinct RandomKey from RandomKeysAttempt

-- as long as we don't have enough, keep generating new keys,
-- with luck (and m much less than n), this won't be necessary
while count(RandomKeys) < m
  NextAttempt = rand()*n + 1
  if not exists (select * from RandomKeys where RandomKey = NextAttempt)
    insert RandomKeys select NextAttempt

-- get our random rows
select *
from RandomKeys r
join table t ON r.RandomKey = t.UniqueKey

If you were really concerned about efficiency, you might consider doing the random key generation in some sort of procedural language and inserting the results in the database, as almost anything other than SQL would probably be better at the sort of looping and random number generation required.

user12861
  • I would recommend adding a unique index on the random key selection and perhaps ignoring duplicates on the insert, then you can get rid of the distinct stuff and the join will be faster. – Sam Saffron Oct 31 '08 at 07:08
  • I think the random number algorithm could use some tweaks -- either a UNIQUE constraint as mentioned, or just generate 2*m numbers, and SELECT DISTINCT, ORDER BY id (first-come-first-serve, so this reduces to the UNIQUE constraint) LIMIT m. I like it. – ojrac Oct 31 '08 at 15:15
  • As to adding a unique index to the random key selection and then ignoring duplicates on insert, I thought this may get you back to O(m^2) behavior instead of O(m lg m) for a sort. Not sure how efficient the server is maintaining the index when inserting random rows one at a time. – user12861 Oct 31 '08 at 16:02
  • As to suggestions to generate 2*m numbers or something, I wanted an algorithm guaranteed to work no matter what. There's always the (slim) chance that your 2*m random numbers will have more than m duplicates, so you won't have enough for your query. – user12861 Oct 31 '08 at 16:05
  • As long as you pay attention to the birthday paradox, you can easily generate a quantity of random numbers with an astronomically low chance of duplicates. – ojrac Nov 01 '08 at 17:10
  • The way I suggested, since the chance of duplicates would be astronomically low anyway, I just generate one new one at a time if necessary. Very unlikely we even need one more. – user12861 Nov 02 '08 at 02:25
  • 1
    How do you get the number of rows in the table? – Awesome-o Feb 24 '14 at 05:11
10

Faster Than ORDER BY RAND()

I tested this method and found it to be much faster than ORDER BY RAND(); it still runs in O(n) time, but does so impressively fast.

From http://technet.microsoft.com/en-us/library/ms189108%28v=sql.105%29.aspx:

Non-MSSQL version -- I did not test this

SELECT * FROM Sales.SalesOrderDetail
WHERE 0.01 >= RAND()

MSSQL version:

SELECT * FROM Sales.SalesOrderDetail
WHERE 0.01 >= CAST(CHECKSUM(NEWID(), SalesOrderID) & 0x7fffffff AS float) / CAST (0x7fffffff AS int)

This will select ~1% of records. So if you need an exact number of records or an exact percentage, estimate your percentage with some safety margin, then randomly prune the excess records from the resulting set, using the more expensive ORDER BY RAND() method.

Even Faster

I was able to improve upon this method even further because I had a well-known indexed column value range.

For example, if you have an indexed column with uniformly distributed integers [0..max], you can use that to randomly select N small intervals. Do this dynamically in your program to get a different set for each query run. This subset selection will be O(N), which can be many orders of magnitude smaller than your full data set.
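
A minimal sketch of that interval trick in MySQL, assuming a hypothetical table big_table with an indexed integer column id roughly uniform over [0, @max_id] (all names and the 20-million bound are placeholders):

-- Pick one random interval spanning 20 consecutive key values; BETWEEN on the indexed column is a range scan
SET @max_id = 20000000;
SET @lo = FLOOR(RAND() * (@max_id - 19));
SELECT * FROM big_table WHERE id BETWEEN @lo AND @lo + 19;

Repeating this with fresh @lo values gives the N small intervals described above.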

In my test this reduced the time needed to get 20 sample records (out of 20 million) from 3 minutes using ORDER BY RAND() down to 0.0 seconds!

Muposat
6

Apparently in some versions of SQL there's a TABLESAMPLE command, but it's not in all SQL implementations (notably, Redshift).

http://technet.microsoft.com/en-us/library/ms189108(v=sql.105).aspx
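
For reference, the basic T-SQL form looks roughly like this (a sketch; Sales.SalesOrderDetail is the sample table used in the linked documentation, and page-based sampling makes the returned row count approximate):

-- SQL Server: sample roughly 10 percent of the table's pages
SELECT * FROM Sales.SalesOrderDetail TABLESAMPLE (10 PERCENT);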

gatoatigrado
  • Very cool! It looks like it's not implemented by PostgreSQL or MySQL/MariaDB either, but it's a great answer if you're on a SQL implementation that supports it. – ojrac May 01 '14 at 18:53
  • I understand that `TABLESAMPLE` is not random in the statistical sense. – Sean May 04 '17 at 11:42
5

Just use

WHERE RAND() < 0.1 

to get 10% of the records or

WHERE RAND() < 0.01 

to get 1% of the records, etc.

Lukas Eder
David F Mayer
  • 1
    That will call RAND for every row, making it O(n). The poster was looking for something better than that. – user12861 May 21 '12 at 15:23
  • 1
    Not only that, but `RAND()` returns the same value for subsequent calls (at least on MSSQL), meaning you will get either the whole table or none of it with that probability. – Andrew Mao Sep 19 '12 at 20:51
2

In certain dialects like Microsoft SQL Server, PostgreSQL, and Oracle (but not MySQL or SQLite), you can do something like

select distinct top 10000 customer_id from nielsen.dbo.customer TABLESAMPLE (20000 rows) REPEATABLE (123);

The reason for not just doing (10000 rows) without the top is that the TABLESAMPLE logic gives you an extremely inexact number of rows (sometimes 75% of the requested count, sometimes 1.25 times it), so you want to oversample and then select the exact number you want. The REPEATABLE (123) provides a random seed.

Zhanwen Chen
  • 1
    This looks like a potentially efficient version of the top answer (filtering by `RAND()`). There are some traps (the most efficient implementations sample based on storage layout, which might not be random enough for some applications), but this is a great tool to have. – ojrac Dec 18 '20 at 15:27
1

I want to point out that all of these solutions appear to sample without replacement. Selecting the top K rows from a random sort or joining to a table that contains unique keys in random order will yield a random sample generated without replacement.

If you want your sample to be independent, you'll need to sample with replacement. See Question 25451034 for one example of how to do this using a JOIN in a manner similar to user12861's solution. The solution is written for T-SQL, but the concept works in any SQL db.

gazzman
1

Try

SELECT TOP 10000 * FROM table ORDER BY NEWID()

Would this give the desired results, without being overly complicated?

Northernlad
  • 2
    Note that `NEWID()` is specific to T-SQL. – Peter O. Oct 15 '20 at 20:57
  • My apologies. It is. Thanks It is however useful to know if anyone comes here looking as I did on a better way, and IS using T-SQL – Northernlad Oct 16 '20 at 14:36
  • 1
    `ORDER BY NEWID()` is functionally the same as `ORDER BY RAND()` -- it calls `RAND()` for every row in the set -- O(n) -- and then sorts the entire thing -- O(n lg n). In other words, that is the worst case solution that this question is looking to improve on. – ojrac Oct 16 '20 at 18:04
0

Starting with the observation that we can retrieve the rows of a table by id (e.g. 5 of them) given a set:

select *
from table_name
where _id in (4, 1, 2, 5, 3)

we can conclude that if we can generate the string "(4, 1, 2, 5, 3)" ourselves, then we have a more efficient way than ORDER BY RAND().

For example, in Java:

import java.util.ArrayList;
import java.util.Collections;

// Build the index list 0..rowsCount-1, shuffle it, and format it as an SQL IN clause
ArrayList<Integer> indices = new ArrayList<Integer>(rowsCount);
for (int i = 0; i < rowsCount; i++) {
    indices.add(i);
}
Collections.shuffle(indices);
String inClause = indices.toString().replace('[', '(').replace(']', ')');

If the ids have gaps, then the initial ArrayList indices should instead be populated from an SQL query that selects the existing ids.

KitKat
0

If you need exactly m rows, realistically you'll generate your subset of IDs outside of SQL. Most methods require selecting the "nth" entry at some point, and SQL tables are really not arrays. The assumption that the keys are consecutive from 1 to the row count, so that you could just join against random integers in that range, is also difficult to satisfy; MySQL, for example, doesn't support it natively, and the lock conditions are... tricky.

Here's an O(max(n, m lg n))-time, O(n)-space solution assuming just plain BTREE keys:

  1. Fetch all values of the key column of the data table in any order into an array in your favorite scripting language in O(n)
  2. Perform a Fisher-Yates shuffle, stopping after m swaps, and extract the first m elements in Θ(m)
  3. "Join" the subarray with the original dataset (e.g. SELECT ... WHERE id IN (<subarray>)) in O(m lg n)

Any method that generates the random subset outside of SQL must have at least this complexity. The join can't be any faster than O(m lg n) with BTREE (so O(m) claims are fantasy for most engines), and the shuffle stays below both n and m lg n, so it doesn't affect the asymptotic behavior.

In Pythonic pseudocode:

ids = sql.query('SELECT id FROM t')            # all n key values
for i in range(m):                             # partial Fisher-Yates: shuffle only the first m slots
  r = int(random() * (len(ids) - i))
  ids[i], ids[i + r] = ids[i + r], ids[i]

results = sql.query('SELECT * FROM t WHERE id IN (%s)' % ', '.join(str(x) for x in ids[:m]))
concat
0

Select 3000 random records in Netezza:

WITH IDS AS (
     SELECT ID
     FROM MYTABLE
)
SELECT ID FROM IDS ORDER BY mt_random() LIMIT 3000
  • Other than adding some SQL dialect-specific notes, I don't think this answers the question of how to query a random sample of rows without 'ORDER BY rand() LIMIT $1'. – ojrac Mar 03 '20 at 14:28
-4

Maybe you could do

SELECT * FROM table LIMIT 10000 OFFSET FLOOR(RAND() * 190000)
staticsan
  • 1
    It looks like that would select a random slice of my data; I'm looking for something a little more complicated -- 10,000 randomly-distributed rows. – ojrac Oct 30 '08 at 05:35
  • Then your only option, if you want to do it in the database, is ORDER BY rand(). – staticsan Nov 03 '08 at 00:29