I have a table with several hundred million rows of data. I want to drop the table, but every operation I run against it loses its connection after 50,000+ seconds (roughly 14 hours), which is under the 60,000-second timeout I have set in the database. I tried putting the DROP TABLE statement in a stored procedure, thinking that if I sent the work to the server it would not need a client connection to finish, but it does the same thing. Is it just timing out, or do I need to do something else?
- Perhaps try deleting the data in batches? Delete a few million rows at a time, keyed on either the primary key or a row number. Then, once all the data is gone, you should be able to drop the table. – Chris Thornton Jun 03 '17 at 22:17
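The batching idea in this comment can be sketched as a repeated small delete. The table name `Table_Data` and the 1,000,000-row batch size come from a later comment; note that in MySQL/MariaDB, `DELETE ... LIMIT` accepts only a row count, not an offset:

```sql
-- Hedged sketch of batched deletion: run repeatedly until no rows remain.
-- Each statement removes at most one batch, keeping transactions small.
DELETE FROM Table_Data
LIMIT 1000000;          -- row count only; DELETE has no LIMIT offset

-- Repeat until the statement reports 0 affected rows, then:
DROP TABLE Table_Data;  -- dropping an empty table is fast
```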
- I'll give that a shot, but even running a simple query with a LIMIT 0, 10 "loses connection". – VaderFlavoredIceCream Jun 03 '17 at 22:18
- I'm trying the following code: DELETE FROM Table_Data WHERE Company_ID LIKE '%A%' LIMIT 0, 1000000; It's currently on hour 14 of running. – VaderFlavoredIceCream Jun 04 '17 at 11:49
- I did change the session limit from 60,000 to 600,000 (added a zero). – VaderFlavoredIceCream Jun 04 '17 at 11:51
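The comment does not say which setting was changed. In MySQL/MariaDB, session timeouts like the ones below are common candidates, though which one actually causes this disconnect is an assumption (client-side read timeouts, e.g. in MySQL Workbench, are another possibility):

```sql
-- Hedged sketch: raise common server-side session timeouts (values in seconds).
-- Which timeout is responsible for the dropped connection is an assumption.
SET SESSION wait_timeout = 600000;      -- idle time before the server disconnects
SET SESSION net_read_timeout = 600000;  -- time to wait for data from the client
```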
- I'd try dropping all the indexes related to the table and then trying a truncate. – Keith John Hutchison Jun 08 '17 at 09:04
1 Answer
Instead, use TRUNCATE TABLE. Internally it creates an equivalent, but empty, table, then swaps it in. This typically takes about a second, even for a very big table.
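The swap the answer describes is a single statement (table name taken from the question's comments):

```sql
-- TRUNCATE recreates the table empty rather than deleting rows one by one,
-- so its cost does not grow with the number of rows.
TRUNCATE TABLE Table_Data;
```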
If you are deleting most of a table, it is usually faster (sometimes a lot faster) to copy out the rows you want to keep:
CREATE TABLE new LIKE real;
INSERT INTO new
    SELECT ... FROM real
        WHERE ...;  -- the rows you want to keep
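The copy-and-keep pattern above is conventionally finished with an atomic rename so the new table takes the old one's place; these two statements are the usual follow-up, assumed here rather than stated in the answer:

```sql
-- Assumed follow-up to the copy above: swap the tables atomically,
-- then discard the old table containing the unwanted rows.
RENAME TABLE real TO old, new TO real;
DROP TABLE old;
```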
Why do you need to delete everything?
For other techniques for massive deletes, including carving big chunks out of a huge table, see https://mariadb.com/kb/en/mariadb/big-deletes/
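One technique from that family, chunking by primary key so each transaction stays bounded, can be sketched as follows; the AUTO_INCREMENT column `id` is an assumption, not something stated in the question:

```sql
-- Hedged sketch: delete in primary-key ranges so each transaction is small
-- and releases locks quickly. Assumes an AUTO_INCREMENT primary key named id.
DELETE FROM Table_Data WHERE id >= 1       AND id < 1000001;
DELETE FROM Table_Data WHERE id >= 1000001 AND id < 2000001;
-- ...continue advancing the range until MAX(id) has been passed.
```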

Rick James
- I tried TRUNCATE also, but it just runs until it times out. I changed the session timeout limit and am trying these methods again. I'm deleting because every time I run a query on the table it times out close to the original 60,000 seconds, so I was hoping to start over and split the table into smaller segments to reduce query times. There is also duplicate information in the table, so it made sense to just start over. – VaderFlavoredIceCream Jun 06 '17 at 02:32
- InnoDB should have "rolled back" the inserts if it died before finishing??? Please provide `SHOW CREATE TABLE`. – Rick James Jun 06 '17 at 02:36