
I have 29900000 records in the table, and OFFSET is consuming too much query execution time.

SELECT * FROM table_records LIMIT 50 OFFSET 1999950
This query takes 33.087 sec.

I changed the offset to 2000000:

SELECT * FROM table_records LIMIT 50 OFFSET 2000000
This query takes 2.030 sec.

EXPLAIN output:

EXPLAIN SELECT * FROM table_records LIMIT 50 OFFSET 29941250


id | select_type | table         | type  | possible_keys | key     | key_len | ref    | rows     | Extra
1  | SIMPLE      | table_records | index | (NULL)        | PRIMARY | 4       | (NULL) | 29900771 |

I removed the offset and just set a LIMIT:

SELECT * FROM table_records LIMIT 50
This query takes 0.002 sec.

Any suggestion or idea is appreciated.

user3151197

2 Answers


It's all about caching.

Put simply, OFFSET sucks. It must read and ignore all "offset" rows, then deliver "limit" rows.

When skipping the rows it must still fetch them -- if they are on disk, this takes time; if they are cached in RAM, it is much faster. (Often 10 times as fast.)

What probably happened in your case: the first query found few, if any, of the rows in RAM, so it had to fetch most or all of the 1999950 skipped rows from disk.

Then your second query quickly scanned the same 1999950 rows from the cache, then fetched the final 50 from disk. (Or possibly the last 50 were already in RAM, since the unit of I/O is a "block" of records.)
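
A quick, hedged way to see the caching effect for yourself: run the same slow query twice in a row. The first run has to pull the skipped rows from disk into the buffer pool; the second run finds them already in RAM and is typically far faster.

SELECT * FROM table_records LIMIT 50 OFFSET 1999950;  -- cold run: rows read from disk
SELECT * FROM table_records LIMIT 50 OFFSET 1999950;  -- warm run: same rows served from RAM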

With LIMIT and/or OFFSET, EXPLAIN rarely gives any clues -- it usually provides an estimate of the total number of rows in the table.

There is yet another problem with your examples... You have no ORDER BY. So, the engine is at liberty to deliver any rows it likes. Usually it is predictable, but sometimes you can get surprises.

But, once you add an ORDER BY, there may need to be a temp table and a sort even before getting the first record! That is, SELECT ... ORDER BY ... LIMIT 50 may be as slow as all the others if you are ordering by something inconvenient for the indexes involved.
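
As a hedged illustration (the column name created_at is hypothetical, not from the question): ordering by an unindexed column forces a sort of the candidate rows before even the first 50 can be returned, while ordering by the indexed PRIMARY KEY lets the engine read rows in index order and stop early.

-- likely needs a filesort before returning 50 rows, if created_at has no index
SELECT * FROM table_records ORDER BY created_at LIMIT 50;

-- can walk the PRIMARY KEY in order and stop after 50 rows, no sort needed
SELECT * FROM table_records ORDER BY id LIMIT 50;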

See how OFFSET sucks when paginating web pages; the workaround is to "remember where you left off". The same technique can fetch the next 1000 rows efficiently, even with gaps in the ids.
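
A minimal sketch of the "remember where you left off" idea (1999950 here stands in for the id of the last row shown on the previous page -- an illustrative value you carry forward, not something computed from an offset):

-- works even with gaps in id: seek just past the last id already shown, stop after 50
SELECT * FROM table_records
WHERE id > 1999950
ORDER BY id
LIMIT 50;

Because the WHERE clause seeks directly into the PRIMARY KEY, the engine touches roughly 50 rows no matter how deep into the table the page is.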

Rick James

There is a way to do it faster, but it will only work if you have an integer primary key and no gaps in the table -- that is, you never delete rows. In that case your query would look like:

SELECT * FROM table_records WHERE id >= 1999950 ORDER BY id LIMIT 50;
Andrej
  • it will fail after deleting some records, but I found a solution in [Quassnoi's Answer](http://stackoverflow.com/questions/4481388/why-does-mysql-higher-limit-offset-slow-the-query-down) – user3151197 Aug 31 '16 at 06:37