I don't think that's a good idea actually - it will make your code much harder to read and maintain for no good reason (and let's not talk about the cost of opening/reading files for each and every query).
If your goal is to decouple data access (how you get/store data) from business logic (how you use that data), a better solution would be to write a data access module with the proper functions and call those functions from your main code, e.g.:
# datalayer.py
import atexit
import sqlite3

# NB: open your connection here - sqlite3 is just an example, adapt to your DB
_conn = sqlite3.connect("app.db")

def close_connection():
    global _conn
    if _conn:
        _conn.close()
        _conn = None

atexit.register(close_connection)

# FIXME: proper arg names
def update_quantity(x, y, z, commit=True):
    c = _conn.cursor()
    try:
        c.execute('INSERT OR IGNORE INTO quantity VALUES (?,?,?)', (x, y, z))
        if commit:
            _conn.commit()
        return c.rowcount
    finally:
        # close the cursor whether the query succeeded or raised
        c.close()
Then in your code:
import datalayer

def func():
    # do things here
    datalayer.update_quantity(a, b, c)
    # etc
This is of course a very dumbed-down example - depending on your app's complexity you may want to have distinct classes for different sets of operations (depending on which datasets you're working with, etc.), and in all cases you'll probably want to make sure the connection is always properly closed, that it's reopened when it's been dropped by the server, etc., but you get the main idea.