I have a table called "unprocessed" from which I want to read 2000 rows, send them over HTTP to another server, and then insert the rows into a "processed" table and remove them from the "unprocessed" table.
My Python code roughly looks like this:
import sys
import MySQLdb

db = MySQLdb.connect("localhost", "username", "password", "database")
# prepare a cursor object using cursor() method
cursor = db.cursor()
# Select all the records not yet sent
sql = "SELECT * FROM unprocessed WHERE SupplierIDToUse = 'supplier1' LIMIT 0, 2000"
cursor.execute(sql)
results = cursor.fetchall()
for row in results:
    id = row[0]
    # <code is here for sending to the other server - it takes about 1/2 a second>
    if sentcorrectly == "1":
        sql = "INSERT INTO processed (id, dateprocessed) VALUES ('%s', NOW())" % (id)
        inserted = False
        try:
            inserted = cursor.execute(sql)
        except MySQLdb.Error:
            print "Failed to insert"
        if inserted:
            print "Inserted"
            sql = "DELETE FROM unprocessed WHERE id = '%s'" % (id)
            try:
                deleted = cursor.execute(sql)
            except MySQLdb.Error:
                print "Failed to delete id from the unprocessed table, even though it was saved in the processed table."
db.commit()  # commit the inserts/deletes (needed now that the tables are InnoDB)
db.close()
sys.exit(0)
I want to be able to run this code concurrently so that I can increase the speed of sending these records to the other server over HTTP. At the moment, if I try to run the code concurrently, I get multiple copies of the same data sent to the other server and saved into the "processed" table, because the SELECT query returns the same IDs in multiple instances of the code.
How can I lock the records when I select them, and then process each record row by row before moving it to the "processed" table? The table was MyISAM, but I converted it to InnoDB today as I realise there's probably a better way of locking the records with InnoDB.
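Is a claim-then-select pattern along these lines a reasonable way to go? This is only a rough sketch of what I have in mind, and it assumes adding a new claimed_by column to the unprocessed table (it doesn't exist yet), so that each running copy of the script only ever selects rows it has already claimed:

import uuid
import MySQLdb

db = MySQLdb.connect("localhost", "username", "password", "database")
cursor = db.cursor()

# Each running copy of the script tags itself with a unique id
worker = str(uuid.uuid4())

# Atomically claim up to 2000 unclaimed rows.
# NOTE: claimed_by is a hypothetical column I would have to add to the unprocessed table.
cursor.execute(
    "UPDATE unprocessed SET claimed_by = %s "
    "WHERE claimed_by IS NULL AND SupplierIDToUse = 'supplier1' LIMIT 2000",
    (worker,))
db.commit()

# Only this instance's claimed rows come back, so concurrent copies of the
# script shouldn't pick up the same ids
cursor.execute("SELECT * FROM unprocessed WHERE claimed_by = %s", (worker,))
results = cursor.fetchall()

Or is there a cleaner way to do this with InnoDB row locks, e.g. SELECT ... FOR UPDATE?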