It can be very time-consuming to perform a bulk operation like yours on a large table. Why?
- Without an index on the filtering column, the server must scan the whole table to find the rows to delete.
- Single statements (in InnoDB) have transaction semantics. That is, the server must be able to roll back the entire operation if any part of it fails, so it has to retain enough undo data to restore every deleted row until the operation completes. Maintaining all of that makes the server work hard.
- Other parts of your application may be accessing the same table at the same time. If so, they must wait for the locks the DELETE holds, and your system blocks.
How to fix this:
- Make sure you have an index on your DATE column so the server can find the relevant rows efficiently. (A sketch of the statement appears right after this list.)
- If other parts of your application query the table while the delete runs, put this statement right before their SELECT statements:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
That tells each query it can go ahead even if it might read a soon-to-be-DELETEd row.
- Best alternative: do the delete in batches, like this:
DELETE FROM TABLE_NAME WHERE DATE >= (StartDate) AND DATE <= (EndDate) LIMIT 1000;
Repeat the delete operation until it processes zero rows. That uses multiple transactions while keeping each one at a reasonable size, so the server is not choked by them.
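Creating the index, if it is missing, might look like this. (A sketch only: the table name TABLE_NAME, column name DATE, and index name idx_date are placeholders; substitute your real identifiers.)

ALTER TABLE TABLE_NAME ADD INDEX idx_date (DATE);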
The batched delete can be driven most simply from Java with a loop like this:
boolean done = false;
while (!done) {
    // executeUpdate returns the number of rows the DELETE affected
    int rowCount = statement.executeUpdate("DELETE FROM TABLE_NAME WHERE DATE >= (StartDate) AND DATE <= (EndDate) LIMIT 1000");
    if (rowCount == 0) done = true;  // nothing left to delete in the range
}
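Fleshed out with JDBC, the whole thing might look like the sketch below. It is only an outline under assumptions: the connection URL, credentials, table name TABLE_NAME, column name DATE, and the date literals are all placeholders, and the LIMIT clause on DELETE assumes MySQL.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchDelete {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details: substitute your own.
        String url = "jdbc:mysql://localhost:3306/mydb";
        String sql = "DELETE FROM TABLE_NAME WHERE DATE >= ? AND DATE <= ? LIMIT 1000";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, "2020-01-01");  // StartDate placeholder
            stmt.setString(2, "2020-12-31");  // EndDate placeholder
            int rowCount;
            do {
                // With autocommit on (the JDBC default), each bounded DELETE
                // runs and commits as its own small transaction.
                rowCount = stmt.executeUpdate();
            } while (rowCount > 0);  // stop once a batch deletes nothing
        }
    }
}

Because each iteration commits, no single transaction has to hold undo data for the whole range, and other sessions get a chance to run between batches.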