I'm looking for advice on efficient ways to stream data incrementally from a Postgres table into Python. I'm in the process of implementing an online learning algorithm and I want to read batches of training examples from the database table into memory to be processed. Any thoughts on good ways to maximize throughput? Thanks for your suggestions.
- Please elaborate on how the data will be structured. "Streaming" it somewhere could just mean dumping the table and reading it from stdout (which is fast, and probably mostly limited by your I/O capabilities). But I suspect you want some structure, and the right approach depends heavily on that. – knitti Feb 24 '14 at 23:06
- Nothing fancy here. Each row corresponds to a particular feature vector, often with integer or floating-point values. I am just scanning through the rows of a single table. Having it in Postgres is a convenience for querying when additional attribute data is available. – Chris Feb 25 '14 at 00:00
2 Answers
If you are using psycopg2, then you will want to use a named cursor; otherwise psycopg2 will try to read the entire result set into memory at once.
cursor = conn.cursor("some_unique_name")   # naming the cursor makes it a server-side cursor
cursor.execute("SELECT aid FROM pgbench_accounts")
for record in cursor:
    something(record)
This will fetch the records from the server in batches of 2000 (the default value of `itersize`) and then parcel them out to the loop one at a time.
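If you would rather hand the training loop explicit batches instead of single rows, `fetchmany()` works on a named cursor as well. A minimal sketch, assuming a hypothetical `features` table and a hypothetical `train_on_batch()` function:

cursor = conn.cursor("stream_features")      # server-side cursor, so rows are not all pulled at once
cursor.itersize = 5000                       # rows transferred from the server per network round trip
cursor.execute("SELECT * FROM features")     # 'features' is a placeholder table name

while True:
    batch = cursor.fetchmany(1000)           # next batch of up to 1000 rows
    if not batch:
        break
    train_on_batch(batch)                    # placeholder for the online-learning update

cursor.close()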

jjanes
- Note that you should set `itersize`; see http://initd.org/psycopg/docs/cursor.html – Craig Ringer Feb 25 '14 at 00:53
- Only if you want to customise the size. By default, `itersize` for named cursors is 2000. – saccodd Dec 18 '19 at 11:16
You may want to look into the Postgres LISTEN/NOTIFY functionality: https://www.postgresql.org/docs/9.1/static/sql-notify.html
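If you go this route with psycopg2, the client receives notifications by polling the connection. A minimal sketch, assuming a hypothetical channel called `new_rows` and a placeholder connection string:

import select
import psycopg2
import psycopg2.extensions

conn = psycopg2.connect("dbname=mydb")       # placeholder connection string
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

cur = conn.cursor()
cur.execute("LISTEN new_rows;")              # 'new_rows' is a placeholder channel name

while True:
    # Block for up to 5 seconds waiting for activity on the connection's socket.
    if select.select([conn], [], [], 5) == ([], [], []):
        continue
    conn.poll()
    while conn.notifies:
        notify = conn.notifies.pop(0)
        print(notify.channel, notify.payload)  # e.g. trigger a fetch of the new rows here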

Harvey