
Scenario: I am displaying a table of records. It initially displays the first 500, with a "show more" button at the bottom that returns the next 500.

Issue: if one record is added between the initial display and clicking "show more", the "order by date offset 500 limit 500" query for the second page will overlap the first page by one row.

I'd like to "order by date, offset until 'id of last row shown', limit 500"

My row IDs are UUIDs. I am open to alternative approaches that achieve the same result.
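For reference, the current paging is essentially the following (the records table name here is illustrative):

SELECT * FROM records ORDER BY date LIMIT 500;               -- initial display
SELECT * FROM records ORDER BY date OFFSET 500 LIMIT 500;    -- "show more"

If a record that sorts into the first 500 arrives between the two requests, every later row shifts down by one position, so the last row of the first page reappears at offset 500.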

Joseph Lennox

2 Answers


I think you can use a subquery in the WHERE clause to accomplish this.

For example, given you're paginating through a users table and you want the records that come after a given user in the listing:

SELECT *
  FROM users
  WHERE created_at < (                -- rows further down a newest-first listing were created earlier
    SELECT created_at
      FROM users
      WHERE users.id = '00000000-1111-2222-3333-444444444444'
      LIMIT 1
  )
  ORDER BY created_at DESC
  LIMIT 5;
schpet

If you can order by ID, you can paginate using

where id > $last_seen_id limit 500

but that's not going to be useful when you're sorting by date.

Sort stability!

I really hope that "date" actually means "timestamp", though; otherwise your ordering will be unstable and you can miss rows during pagination. If it really is a date you'll have to order by date, id to get a stable ordering, and you should probably do so even for a timestamp.
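As a sketch (records is a placeholder table name), the stable form of the ordering is simply:

SELECT * FROM records ORDER BY date, id LIMIT 500;   -- id breaks ties between rows with equal dates

so that two requests always see rows sharing the same date in the same relative order.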

State on client

One option is to push the state out to the client. Have the client remember the last-seen (date, id) tuple and use a row-value comparison, so rows with a later date but a "smaller" id don't get skipped:

where (date, id) > ($last_seen_date, $last_seen_id) order by date, id limit 500
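As a full statement this looks something like the following sketch (records is a placeholder table name); an index on (date, id) lets it seek straight to the resume point instead of scanning:

SELECT *
  FROM records
  WHERE (date, id) > ($last_seen_date, $last_seen_id)  -- resume strictly after the last row the client saw
  ORDER BY date, id
  LIMIT 500;

This works fine with UUID ids: their order is arbitrary, but it is a total order, which is all the tie-break needs.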

Cursors

Do you care about scalability? If not, you can use a server-side cursor. Declare the cursor for the full query, without the LIMIT. Then FETCH chunks of rows as requested. To do this your app must have a way to consistently bind a connection to a specific user's requests, though, and not to reset that connection or return it to the pool between requests. This might not be practical with your pool/framework, but is probably the best solution if you can do it.
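A minimal sketch in PostgreSQL (cursor and table names are illustrative); WITH HOLD keeps the cursor alive after COMMIT, so only the connection itself has to stay pinned to the user:

BEGIN;
DECLARE records_cur CURSOR WITH HOLD FOR
  SELECT * FROM records ORDER BY date, id;  -- the full query, no OFFSET/LIMIT
COMMIT;

FETCH 500 FROM records_cur;   -- initial display
FETCH 500 FROM records_cur;   -- each "show more" continues where the last fetch stopped

CLOSE records_cur;            -- when the user is done

Note that PostgreSQL materializes a WITH HOLD cursor's remaining result set when the transaction commits, which is another reason this doesn't scale to many concurrent users.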

Temp tables

Another, even less scalable, option is to CREATE TABLE sessiondata.myuser_myrequest_blah AS SELECT ..., then paginate that table. It's guaranteed not to change. This avoids the difficulty of keeping a consistent connection across requests, but the first request will be very slow to respond, and it's completely impractical for large user counts or large amounts of data.
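Sketched out (reusing the illustrative table names from above):

CREATE TABLE sessiondata.myuser_myrequest_blah AS
  SELECT * FROM records ORDER BY date, id;    -- one-off snapshot of the full result

SELECT *                                      -- later requests page through the snapshot;
  FROM sessiondata.myuser_myrequest_blah      -- OFFSET is safe here because nothing changes
  ORDER BY date, id
  OFFSET 500 LIMIT 500;

DROP TABLE sessiondata.myuser_myrequest_blah; -- clean up when the session ends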

Craig Ringer