I want to perform a number of operations on a PostgreSQL database. Each operation selects rows from one table and inserts them into a new table that has primary keys, ignoring any rows that would violate the primary key constraints. There are many large tables in the database to be processed, and it seems that this sort of task should be run asynchronously.
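For reference, a single one of these operations might look like the following. This is a minimal sketch, assuming psycopg2 as the driver and hypothetical table and column names; ON CONFLICT DO NOTHING (available since PostgreSQL 9.5) is what makes the insert skip rows that violate the primary key:

import psycopg2

# Hypothetical connection string, table names, and columns for illustration.
conn = psycopg2.connect("dbname=mydb user=myuser")
with conn, conn.cursor() as cur:
    # INSERT ... SELECT with ON CONFLICT DO NOTHING silently skips any
    # rows that would violate the target table's primary key constraint.
    cur.execute("""
        INSERT INTO target_table (id, payload)
        SELECT id, payload FROM source_table
        ON CONFLICT DO NOTHING;
    """)
conn.close()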
It strikes me that one way to go about this would be to use the subprocess module in Python to run bash scripts which perform these operations, using something like subprocess.Popen. I can open many terminal sessions and execute queries in parallel, and to my understanding this approach imitates that. To borrow an example from here:
from subprocess import Popen, PIPE
import glob

# Launch one decompression process per bz2 file, all in parallel.
f_list = glob.glob('./*bz2')
cmds_list = [['./bunzip2_file.py', file_name] for file_name in f_list]
procs_list = [Popen(cmd, stdout=PIPE, stderr=PIPE) for cmd in cmds_list]

# Block until every child process has finished.
for proc in procs_list:
    proc.wait()
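Adapted to the Postgres case, the same pattern might look like this. This is only a sketch, assuming the queries are run through the psql command-line client and that each table's statements live in a hypothetical SQL file:

from subprocess import Popen, PIPE

# Hypothetical list of SQL scripts, one per table to migrate.
sql_files = ['copy_table_a.sql', 'copy_table_b.sql', 'copy_table_c.sql']

# psql -f runs the statements in the given file; one child process per script.
cmds = [['psql', '-d', 'mydb', '-f', fname] for fname in sql_files]
procs = [Popen(cmd, stdout=PIPE, stderr=PIPE) for cmd in cmds]

for proc in procs:
    # communicate() waits for the child and drains its pipes, which avoids
    # the deadlock wait() can hit when PIPE buffers fill up.
    out, err = proc.communicate()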
My questions are:
Are there any obvious issues with calling many postgres queries using subprocess?
Under what circumstances might I instead consider using asyncio? Does it provide any advantages over the method discussed above?
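For context, the asyncio equivalent I have in mind would be something like the sketch below, using asyncio's own subprocess support with the same hypothetical psql invocation as above:

import asyncio

async def run_query(fname):
    # create_subprocess_exec launches the child without blocking the
    # event loop; one coroutine per SQL script.
    proc = await asyncio.create_subprocess_exec(
        'psql', '-d', 'mydb', '-f', fname,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    out, err = await proc.communicate()
    return proc.returncode

async def main():
    files = ['copy_table_a.sql', 'copy_table_b.sql']
    # Run all scripts concurrently and collect their exit codes.
    return await asyncio.gather(*(run_query(f) for f in files))

print(asyncio.run(main()))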