I've been learning how to use SQLAlchemy (I'm still very much a beginner). I am using the ORM (as opposed to the SQLAlchemy Expression Language) and have set up a number of scripts that insert timestamped sensor data into the database as it becomes available, using the simple
>>> session.add(record)
>>> session.commit()
approach.
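For context, my mapped class looks roughly like this (simplified, and the names here are made up, but the unique constraint on the sensor/timestamp pair is the part that matters):

from sqlalchemy import Column, DateTime, Float, Integer, UniqueConstraint
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class SensorReading(Base):
    __tablename__ = 'sensor_readings'
    id = Column(Integer, primary_key=True)
    sensor_id = Column(Integer, nullable=False)
    timestamp = Column(DateTime, nullable=False)
    value = Column(Float)
    # Each (sensor_id, timestamp) pair must be unique; re-inserting the
    # same reading violates this constraint.
    __table_args__ = (UniqueConstraint('sensor_id', 'timestamp'),)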
On occasion, one of the scripts receives some 'new' data that isn't in fact new at all; the exact same data has already been processed and added to the database.
If I ignore this and simply add whatever data I get, I get an
(IntegrityError) duplicate key value violates unique constraint
I initially worked around this by simply catching the exception and rolling back the transaction (sketched below). However, this produces a great many IntegrityErrors and clogs the database error logs. Clearly this is a poor solution; instead I should either be updating the duplicate data, or first checking what is already present and only adding the new rows. There are countless ways to do this, but I'm sure there is a simple, efficient built-in approach (since this can hardly be a unique problem).
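For reference, the workaround I have now is essentially this (simplified, using the hypothetical SensorReading model above):

from sqlalchemy.exc import IntegrityError

try:
    session.add(record)
    session.commit()
except IntegrityError:
    # The unique constraint fired, so this row already exists;
    # undo the failed transaction and carry on with the next reading.
    session.rollback()

The check-first alternative I have in mind would mean an extra SELECT per row, e.g. session.query(SensorReading).filter_by(sensor_id=..., timestamp=...).first() before each add, which seems wasteful for bulk data.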
What is the best way of solving this issue?