132

I would like to construct a query that displays all the results in a table, but is offset by 5 from the start of the table. As far as I can tell, MySQL's LIMIT requires a limit as well as an offset. Is there any way to do this?
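
For reference, these are the two equivalent forms MySQL's LIMIT accepts (the table name here is just a placeholder):

SELECT * FROM my_table LIMIT 5, 10;        -- skip the first 5 rows, return the next 10
SELECT * FROM my_table LIMIT 10 OFFSET 5;  -- the same thing with the alternative syntax

Both forms require the row count; I only want the offset part.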

informatik01
stillinbeta
  • This is a totally valid question, but I wonder if what would be better is to grab everything and disregard the first few records programmatically. Given the horror of what seems to be the best answer (limit 5, 18446744073709551615), I'd heavily favor working around the limitations of MySQL's LIMIT. – cesoid Jan 31 '15 at 02:31
  • @cesoid what if you want `limit 5000, 18446744073709551615`. You're not going to fetch an extra 5000 rows just for your code to look pretty. – elipoultorak Oct 26 '15 at 11:03
  • @user3576887 I think you're right, I was just considering the question above with the assumption that 5 was the only requirement, rather than some varying amount that might be much larger (and rather than solving someone else's problem). – cesoid Oct 28 '15 at 14:29
  • I suggest that this is such a rare task that the ugliness of the solution can be accepted. – Rick James May 15 '17 at 23:16

10 Answers

171

From the MySQL Manual on LIMIT:

To retrieve all rows from a certain offset up to the end of the result set, you can use some large number for the second parameter. This statement retrieves all rows from the 96th row to the last:

SELECT * FROM tbl LIMIT 95, 18446744073709551615;
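
The same statement can also be written with the OFFSET keyword, which some people find more readable (same semantics, only the syntax differs):

SELECT * FROM tbl LIMIT 18446744073709551615 OFFSET 95;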
Michael Berkowski
Greg
  • Awful! I came here hoping that MySQL made the limit optional when an offset is provided, just as the LIMIT clause itself is optional... but no! I've seen this 18446744073709551615 scattered all over the code and I was blaming lazy programmers, but it's a design feature! – Petruza May 24 '10 at 15:10
  • 18446744073709551615 is 2^64-1 for those who were wondering. You may want to watch out because you won't be able to store this value in a 32-bit integer. You have to make sure you store it as a string to ensure compatibility. – AlicanC Dec 07 '11 at 22:51
  • Terrible! They need to get more elegant than that... `Limit -1` or `Limit Null` looks pretty reasonable! Or at least LIMIT should accept a subquery like `select * from table limit (select count(*) from table)` – vulcan raven Jan 15 '12 at 22:04
  • Use PHP's `PHP_INT_MAX` to avoid overflow effects. – Karl Adler Apr 07 '14 at 15:11
  • I think this falls under the category of TODINAWYW, which I just made up, and it stands for The Official Documentation is Not Always What You Want. It explains how you would get all records if you had a table that was maxed out at (about) 18 quintillion rows, and also suggests using "some large number" as an alternative, but I would suggest that there is a meaningful, non-infinite, and non-18-quintillion number of records you are willing to get back, and you should use that. (I included this in my own answer.) – cesoid Jul 29 '16 at 17:12
  • There seems to be some special handling in MySQL for 18446744073709551615. It's not accepted as a LIMIT for a DELETE clause with safe mode on, for one example. However 18446744073709551614 is accepted. – stannius May 24 '17 at 18:17
32

As you mentioned, LIMIT is required, so you need to use the biggest limit possible, which is 18446744073709551615 (the maximum value of an unsigned BIGINT):

SELECT * FROM somewhere LIMIT 18446744073709551615 OFFSET 5
Czimi
  • Wow, is this the official solution from the MySQL team? – Antony May 23 '11 at 18:11
  • I prefer MySQL over PostgreSQL. Google even prefers it (MariaDB) so ... – Spock Aug 28 '22 at 16:25
  • @Spock Why would anybody prefer MySQL over PostgreSQL? – Rodrigo Dec 11 '22 at 02:16
  • @Rodrigo imho MySQL is better for websites and online transactions and simple structures, while PostgreSQL is better for large and complicated analytical processes (objects etc). And in the end it's a matter of preference and experience. – Spock Dec 12 '22 at 12:37
  • @Spock The cheapest website server can handle PostgreSQL well, so I see no overhead in using it at all. So, why use two different “languages”, when one is enough? Unless, maybe, you’re doing embedded databases, but in this case you’ll need something like SQLite. – Rodrigo Dec 14 '22 at 19:19
  • I use it with a mixture of databases - for live data I use Firebase. But I'll definitely give PostgreSQL a closer look, thanks. I'm just so familiar with MySQL that it's my go-to. – Spock Dec 15 '22 at 20:03
19

As noted in other answers, MySQL suggests using 18446744073709551615 as the number of records in the limit, but consider this: What would you do if you got 18,446,744,073,709,551,615 records back? In fact, what would you do if you got 1,000,000,000 records?

Maybe you do want more than one billion records, but my point is that there is some limit on the number you want, and it is less than 18 quintillion. For the sake of stability, optimization, and possibly usability, I would suggest putting some meaningful limit on the query. This would also reduce confusion for anyone who has never seen that magical looking number, and have the added benefit of communicating at least how many records you are willing to handle at once.

If you really must get all 18 quintillion records from your database, maybe what you really want is to grab them in increments of 100 million and loop 184 billion times.
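
As a rough sketch of that incremental approach (table and column names here are made up, and in practice application code would advance the offset in a loop):

SELECT * FROM big_table ORDER BY id LIMIT 0, 100000000;
SELECT * FROM big_table ORDER BY id LIMIT 100000000, 100000000;
-- ...and so on, advancing the offset by 100 million each time,
-- until a batch comes back with fewer than 100 million rows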

cesoid
  • You are right, but leaving this decision to the developer is not a good choice – amd May 21 '17 at 08:04
  • @amd Could you explain that a little more? I don't know what you're trying to say. – cesoid May 22 '17 at 15:49
  • @cesoid I think he's saying that devs shouldn't be the ones to arbitrarily choose the business logic, which I agree with, but only to a point. Let's say you're returning a list of orders to a customer. It's perfectly reasonable to never return more than, say, a million at a time, but limiting to 100 might cause confusion. – Autumn Leonard Aug 03 '18 at 20:15
  • @amd I'm not saying that the developer should change the behavior of the app in order to avoid using 18446744073709551615. I'm saying that they should consider whether using that number makes sense as part of the implementation of whatever the client or interface designer has requested, and that it is very unlikely to be the right implementation for anything. The decision to use MySQL was probably already made by the developer without asking whether there would be more than 18 quintillion of something. – cesoid Jul 10 '19 at 13:43
  • My 2 cents on this... You can already easily get all records without having to use that magic 18 quintillion number... Consider this query - `select * from table`. So that begs the question, what would you do if you got 18 quintillion records for that query? Are we to modify all our basic queries with a limit like this `select * from table limit 0, 1000000` to avoid getting too many records? I just think that it's bad design on MySQL's part, especially when other database systems like PostgreSQL can have queries like this `select * from myTable offset 10`, which I use all the time – Ray Perea Apr 14 '22 at 01:33
  • @RayPerea I agree that MySQL should be designed to have an offset without an upper limit, and most of the time you probably don't need to limit your queries in that way. I just think that the best option for working around that shortcoming is conveniently pretty good practice anyway, which is to at least briefly consider some upper limit, even if your only safe guess is to make it really high, like 1,000,000,000,000. It's annoying, but the other options seem worse. – cesoid Apr 22 '22 at 11:33
5

Another approach is to generate a row counter with a user variable and then filter on it using HAVING:

SET @a := 0;
SELECT @a := @a + 1 AS counter, t.* FROM my_table t
HAVING counter > 4;

But I would probably stick with the high limit approach.

mgraph
jishi
  • Thank you. I wonder how I can put such a query into a PHP statement, i.e. something like `$sql = 'SET @a := 0; SELECT .....';` – Reham Fahmy Aug 17 '18 at 22:57
5

As others have mentioned, the MySQL manual suggests using the maximum value of an unsigned BIGINT, that awful number (18446744073709551615). To make it a little bit less messy, you can use the tilde "~" bitwise operator:

  LIMIT 95, ~0

It works as a bitwise negation: the result of "~0" is 18446744073709551615.

Bruno.S
0

You can use a MySQL prepared statement with LIMIT:

START TRANSACTION;
SET @my_offset = 5;
-- use the table's current row count as a (more than sufficient) limit
SET @rows = (SELECT COUNT(*) FROM my_table);
PREPARE statement FROM 'SELECT * FROM my_table LIMIT ? OFFSET ?';
EXECUTE statement USING @rows, @my_offset;
DEALLOCATE PREPARE statement;
COMMIT;

Tested in MySQL 5.5.44. This way we avoid hard-coding the number 18446744073709551615.

Note: the transaction makes sure that the value of @rows matches the state of the table at the moment the statement is executed.

sissi_luaty
0

I ran into a very similar issue when practicing LeetCode problem 1321, in which I had to select all the dates except the first six.

I achieved this in MySQL with the help of the ROW_NUMBER() window function and a subquery. For example, the following query returns all the results with the first five rows skipped:

SELECT
    fieldname1,
    fieldname2
FROM (
    SELECT
        *,
        ROW_NUMBER() OVER() row_num
    FROM
        mytable
) tmp
WHERE
    row_num > 5;

You may need to add more logic in the subquery, especially in OVER(), to fit your needs. In addition, the RANK()/DENSE_RANK() window functions may be used instead of ROW_NUMBER(), depending on your actual offset logic.
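
For example, a sketch of the same query with an explicit ordering (requires MySQL 8.0 or later for window functions; `created_at` is just a stand-in for whatever column defines your order):

SELECT
    fieldname1,
    fieldname2
FROM (
    SELECT
        *,
        ROW_NUMBER() OVER (ORDER BY created_at) AS row_num
    FROM
        mytable
) tmp
WHERE
    row_num > 5;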

Reference:

MySQL 8.0 Reference Manual - ROW_NUMBER()

fishstick
-1

Just today I was reading about the best way to get huge amounts of data (more than a million rows) from a MySQL table. One way is, as suggested, using LIMIT x,y where x is the offset and y the number of rows you want returned. However, as I found out, it isn't the most efficient way to do so. If you have an auto-increment column, you can just as easily use a SELECT statement with a WHERE clause saying from which record you'd like to start.

For example, SELECT * FROM table_name WHERE id > x;

It seems that MySQL still reads through all the skipped rows when you use LIMIT with an offset and only then returns the records you asked for: not the best for performance.
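
A rough sketch of how that idea is commonly used for batched reads (table and column names are placeholders; the ORDER BY matters so that "the last id seen" is well defined):

SELECT * FROM table_name WHERE id > 0 ORDER BY id LIMIT 1000;
-- for the next batch, replace 1234 with the largest id returned by the previous query
SELECT * FROM table_name WHERE id > 1234 ORDER BY id LIMIT 1000;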

Source: an answer to this question on the MySQL Forums. Just take note, the question is about 6 years old.

fed
  • This will give incorrect results if you've ever deleted a record. This method is especially dangerous, because it works most of the time, and fails silently when it doesn't. – octern Apr 15 '14 at 18:48
-2

I know that this is old, but I didn't see a similar response, so this is the solution I would use.

First, I would execute a count query on the table to see how many records exist. This query is fast and normally the execution time is negligible. Something like:

SELECT COUNT(*) FROM table_name;

Then I would build my query using the result I got from count as my limit (since that is the maximum number of rows the table could possibly return). Something like:

SELECT * FROM table_name LIMIT count_result OFFSET desired_offset;

Or possibly something like:

SELECT * FROM table_name LIMIT desired_offset, count_result;

Of course, if necessary, you could subtract desired_offset from count_result to get an actual, accurate value to supply as the limit. Passing the "18446744073709551615" value just doesn't make sense if I can actually determine an appropriate limit to provide.
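
For instance, if the count query came back with 1250 (a made-up figure), the second query would simply be:

SELECT * FROM table_name LIMIT 1250 OFFSET 5;
-- or, equivalently
SELECT * FROM table_name LIMIT 5, 1250;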

-7
WHERE .... AND id > <YOUROFFSET>

id can be any auto-incremented or unique numeric column you have...
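
For the question's offset of 5, that might look like the sketch below; note it only matches a true row offset if the id values start at 1 and are contiguous (no gaps from deleted rows):

SELECT * FROM my_table WHERE id > 5 ORDER BY id;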