
I have a tool that is built mostly on shell scripts. Each time the test runs, a single shell script (SqlCmd.sh) connects to multiple schemas of a single DB (through sqlplus) to perform various operations (select, insert, update, etc.).
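For illustration, the per-operation pattern is roughly of this shape (a simplified sketch, not the real script; the run_sql helper, credentials and table names are placeholders):

run_sql() {
    # each call starts a brand-new sqlplus session and exits right after it
    local schema="$1" statement="$2"
    sqlplus -s "${schema}/${DB_PASSWORD}@${DB_TNS}" <<EOF
WHENEVER SQLERROR EXIT FAILURE
${statement};
EXIT
EOF
}
run_sql SCHEMA1 "SELECT COUNT(*) FROM some_table"
run_sql SCHEMA2 "UPDATE some_table SET status = 'S'"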

As a result, the DB connections are sometimes not closed properly...

What I want is one stable connection throughout all of the SQL operations, so as to avoid leaving dirty DB connections behind.

I've been trying to implement the DB handling with a Python connection pool that establishes the DB connection and keeps it open, but I'm not sure whether the shell scripts could be called from the Python connection pool, or whether that is feasible at all!

OR

Is there any other way to solve this and get a stable DB connection in my environment? Please let me know if any other information is required.

Thanks in advance!

User123
  • I don't think it will be easy to pass connections created by a Python script to a shell script (it's probably impossible, and even if it is possible, it's not the right way to solve this). I would recommend writing the whole thing in Python, or focusing on fixing your connections-not-closing-properly problem – shevron Apr 12 '21 at 06:48
  • I'm assuming you're asking how to execute commands from the command line with a Python script? You can use the subprocess module (e.g. `import subprocess; subprocess.Popen("", shell=True)`). If you want to execute bash scripts, you would do something like `subprocess.Popen("./script-name", shell=True)`, or `subprocess.Popen("python3 filename.py", shell=True)` for Python scripts, and so on – Matthew Schell Apr 15 '21 at 00:05
  • @MatthewSchell: Thanks for the info, but I'm looking to create a connection pool in Python and then pass those connections to the shell scripts containing the SQL statements... this is required to have a stable DB connection throughout the whole process, without multiple connects and disconnects to the DB for each SQL operation... Thanks! – User123 Apr 15 '21 at 04:12
  • @User123 can you please provide us with some code? I would like to try and help you further – Matthew Schell Apr 15 '21 at 12:41
  • Look for ***`newConnector`*** in [this answer: How do I set a variable to the output of a command in Bash?](https://stackoverflow.com/a/41236640/1765658) – F. Hauri - Give Up GitHub Apr 15 '21 at 19:34
  • @F.Hauri that's why I put the word "probably", and "if possible". I am well aware of POSIX abilities to pass file handlers and open connections between processes, I just think it's a hell of a complex solution to a problem that can be avoided entirely with better design, is all. – shevron Apr 20 '21 at 18:14

1 Answer


Using coproc

You could run the Oracle DB client as a background I/O process, in order to initiate only one connection, then keep it open until the end of your script.

Something like:

# Start sqlplus as a coprocess ("-with args -to -connect DB" stands for your real connect options)
coproc sqlplus -with args -to -connect DB
# Write queries to the coprocess input, then read answers back
# ($COPROC expands to ${COPROC[0]}, the read end; ${COPROC[1]} is the write end)
echo >&${COPROC[1]} "SELECT 3*3 as foo;"
while read -t 1 -ru $COPROC answer; do
    echo "$answer"
done

... to start.

Then, in order to avoid the useless timeout (`read -t 1`), you could set up a boundary marker.

Unfortunately (or hopefully) I don't know Oracle DB.

The trick is to build a boundary prompt by using the column header in the answer as a marker, then stop reading the answer once the boundary is received:

# Send the query, immediately followed by a boundary query whose column
# header ("myBoundary") marks the end of the real answer
echo >&${COPROC[1]} "$SqlQuery;"'SELECT date(now) as `myBoundary`;'
while read -ru $COPROC answer; do
    [ "$answer" = "myBoundary" ] && break
     ...
done
read -ru $COPROC lastAnswerDate
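To reuse this from several places in SqlCmd.sh, the whole exchange could be wrapped in one small function around the shared coprocess (the same idea as replacing each direct client call by a wrapper). Below is a rough, untested sketch for sqlplus; the connect string, the `SET` line and the `FROM dual` boundary query are assumptions you would have to adapt to your client:

# One shared sqlplus coprocess for the whole run (connect string is a placeholder)
coproc sqlplus -s user/password@DB
# Assumption: suppress headers, feedback and pagination so that only data rows
# and the boundary row come back on the pipe
echo >&${COPROC[1]} "SET HEADING OFF FEEDBACK OFF PAGESIZE 0"
# myOracle <outVariable> <sqlStatement>: send the statement plus a boundary
# query, then collect output lines until the boundary row is read back
myOracle() {
    local -n _out=$1            # bash 4.3+ nameref to the caller's variable
    local line; _out=""
    echo >&${COPROC[1]} "$2"
    echo >&${COPROC[1]} "SELECT 'myBoundary' FROM dual;"
    while read -ru ${COPROC[0]} line; do
        [ "$line" = "myBoundary" ] && break
        _out+="$line"$'\n'
    done
}
myOracle pendingJobs "SELECT count(*) FROM SYNC_JOB WHERE STATUS IN ('R', 'P');"
echo "$pendingJobs"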

I've posted a full sample at shell_connector.sh. This script can be sourced to use the newConnector function, or simply run to see a little demo, using sqlite, in action.

This script is ready to use with sqlite, mariadb and PostgreSQL. If you successfully create a new connector for Oracle DB, feel free to share!

Using a simple one-way fd

If your script doesn't read anything back from your SQL client, you could use a single FIFO, then write each subsequent request to it:

# Create the FIFO, then start the SQL client reading from it, in the background
mkfifo /path/to/mysqlfifo
exec sqlplus -with options </path/to/mysqlfifo &
SQLPLUSPID=$!

Then you could use it as many times as needed:

echo >/path/to/mysqlfifo "SELECT count(*) FROM SYNC_JOB WHERE STATUS IN('R', 'P');"

...

echo >/path/to/mysqlfifo "UPDATE SYNC_JOB set status='S' WHERE STATUS in('R', 'P');"

Don't forget to

echo >/path/to/mysqlfifo "quit;" # Or the exit command matching your client's syntax

kill $SQLPLUSPID
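Put together, the whole FIFO lifecycle could look like this untested sketch (connect options and statements are placeholders). The extra write descriptor held open on the FIFO is an addition to the recipe above: it prevents the client from seeing end-of-file between two echoes, since each plain redirection opens and closes the FIFO again.

# Untested end-to-end sketch of the one-way FIFO approach (placeholder options)
mkfifo /path/to/mysqlfifo
# Start the client reading from the FIFO, in the background
sqlplus -with options </path/to/mysqlfifo &
SQLPLUSPID=$!
# Keep one write descriptor open so the client never sees EOF between echoes
exec {SQLFIFO}>/path/to/mysqlfifo
echo >&$SQLFIFO "SELECT count(*) FROM SYNC_JOB WHERE STATUS IN('R', 'P');"
echo >&$SQLFIFO "UPDATE SYNC_JOB set status='S' WHERE STATUS in('R', 'P');"
# Ask the client to quit, close our end of the FIFO, then clean up
echo >&$SQLFIFO "quit;"        # or the exit command matching your client's syntax
exec {SQLFIFO}>&-
wait $SQLPLUSPID
rm /path/to/mysqlfifo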

Under bash, you could avoid the external FIFO by using process substitution:

# Open a dedicated file descriptor writing straight into the client's stdin
exec {SQLCLNT}> >(exec sqlplus -with options)
SQLPLUSPID=$!

Then you could use it as many times as needed:

echo >&$SQLCLNT "SELECT count(*) FROM SYNC_JOB WHERE STATUS IN('R', 'P');"
...
echo >&$SQLCLNT "UPDATE SYNC_JOB set status='S' WHERE STATUS in('R', 'P');"

And once done:

echo >&$SQLCLNT "\\q"
kill $SQLPLUSPID
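As a small variant on this cleanup (a suggestion, not tested here): closing the descriptor lets the client see end-of-input and finish by itself, so the kill becomes a fallback rather than the normal path:

echo >&$SQLCLNT "\\q"   # or the exit command matching your client's syntax
exec {SQLCLNT}>&-       # close our end; the client reads EOF and can exit cleanly
wait $SQLPLUSPID 2>/dev/null || kill $SQLPLUSPID 2>/dev/null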
F. Hauri - Give Up GitHub
  • Sorry for the late response, I just wanted to confirm whether multiple schemas can be connected for executing the SQL statements? I've updated the log pattern of the SQL statements' execution, where the DB connection is created each time the script (SqlCmd.sh) starts and closed at the end of the script. – User123 Apr 19 '21 at 10:46
  • Yes, once the connection is open, it can be reused as many times as required, until the script exits. I use this to create temporary SQL tables and work on them many times until the end of the script. From the script, you could even run subshells and permit them to use your already opened connection... – F. Hauri - Give Up GitHub Apr 19 '21 at 16:00
  • @User123 Looking at your logs, you have to run `coproc tsql`, then send requests as `echo >&${COPROC[1]} 'SELECT count(*) FROM SYNC..'` and read the answer by using `read -u $COPROC answer`... But this doesn't use `python`, only [tag:bash]! – F. Hauri - Give Up GitHub Apr 20 '21 at 07:02
  • The trick is to **edit** your `SqlCmd.sh` in order to replace each call to `tsql` with a function like `myTsql`, following my sample script [shell_connector](https://f-hauri.ch/vrac/shell_connector.sh.txt) – F. Hauri - Give Up GitHub Apr 20 '21 at 09:03