
For a simple select query like `select column_name from table_name` on a very large table, is it possible to have the output provided as the scan of the table progresses?

If I abort the command after some time, I expect to get the output the select has produced thus far.

Think of `cat`, which I believe doesn't wait until it has read the whole file before producing output.

Does MySQL or other RDBMS systems support this?

sam
    This is, in fact, the standard behaviour (but the frontend *might* buffer the produced data until it is complete). – wildplasser Mar 06 '19 at 21:17
  • With Python/psycopg2, you can obtain rows as they become ready [using a server-side cursor](https://stackoverflow.com/a/48734989/190597). – unutbu Mar 06 '19 at 22:57

1 Answer


PostgreSQL always streams the result to the client, and usually it is the client library that collects the whole result set before returning it to the user.

The C API libpq has functionality that supports this (single-row mode). The main disadvantage of this approach is that you could get a run-time error after you have already received some rows, so that's a case you'd have to handle.

The traditional way to receive a query result in parts is to use a cursor and fetch results from it. This is a technique supported by all client APIs.

Cursors are probably what you are looking for, and they are supported in some fashion by every RDBMS I know.
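As a minimal sketch of fetching a result set in parts, here is the cursor pattern using Python's built-in `sqlite3` (chosen only so the snippet runs without a server; the table name `t` and row count are made up). With PostgreSQL you would instead open a *named* (server-side) cursor, e.g. `conn.cursor(name="c")` in psycopg2, so that batches are pulled from the server rather than buffered client-side:

```python
import sqlite3

# In-memory SQLite stands in for any RDBMS here; the cursor-fetch pattern
# is the same idea the answer describes for PostgreSQL server-side cursors.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100_000)])

cur = conn.execute("SELECT id FROM t")

# Fetch in small batches: each fetchmany() pulls the next chunk of the scan,
# so rows can be processed (or the query abandoned) long before the full
# result is materialized.
first_batch = cur.fetchmany(10)
print(first_batch[:3])   # rows are available before the scan finishes

cur.close()              # closing the cursor discards the remaining rows
conn.close()
```

Closing the cursor early plays the role of aborting the command in the question: whatever was fetched so far has already been delivered, and the rest of the scan is simply never performed.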

Laurenz Albe