I'm curious about this behaviour:
test_db=# create table test_table(id bigint);
test_db=# insert into test_table(id) select * from generate_series(1, 1000000);
test_db=# select * from test_table offset 100000 limit 1;
id
-------
87169
(1 row)
test_db=# select * from test_table offset 100000 limit 1;
id
--------
186785
(1 row)
test_db=# select * from test_table offset 100000 limit 1;
id
--------
284417
(1 row)
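For comparison, with an explicit ORDER BY I'd expect the result to be fully deterministic (id = 100001, since the ids 1..1000000 were inserted in order), so this only happens without one:
test_db=# select * from test_table order by id offset 100000 limit 1;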
Without the ORDER BY, it seems that Postgres iterates forward with some randomizing rule. Why does a large offset "mix" the table? And after that, if we use a small offset, it returns a "stable" value:
test_db=# select * from test_table offset 1 limit 1;
id
--------
282050
(1 row)
test_db=# select * from test_table offset 1 limit 1;
id
--------
282050
(1 row)
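My only guess so far is that this is somehow related to synchronized sequential scans. I assume disabling them for the session, as below, would pin the scan's starting point, but I haven't verified that this actually explains the behaviour:
test_db=# set synchronize_seqscans = off;
test_db=# select * from test_table offset 100000 limit 1;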