
In PostgreSQL, I added an index to a large table, and it took about 1 second (which, frankly, surprised me).

When I went to drop the index, I let it run for >200 seconds without it returning, and finally cancelled the drop operation.

CREATE INDEX idx_cservicelocationdivisionstatus_inversed
ON cservicelocationdivisionstatus (cservicelocationdivisionid, startdate, enddate DESC);

Takes very little time, but

DROP INDEX idx_cservicelocationdivisionstatus_inversed;

Took so long that I gave up and cancelled.

The table cservicelocationdivisionstatus has 6 columns and about 310k rows of data.

Why does removing an index take so much longer than creating it?

EDIT: This page indicates that for MySQL, dropping an index on a table with multiple indexes copies the table and re-inserts all the rows without the index you're dropping. (Worse, with MySQL, if you drop multiple indexes on the same table, it re-copies the table once for each index you're removing, instead of being smart and re-copying the data once without all the indexes you're dropping.) Is something like this happening with Postgres?

– Mar

2 Answers


An index on a table of the size you mentioned should generally be able to be dropped pretty quickly (and certainly in far less than 3+ minutes). It sounds to me like the table/index was in use, and therefore could not be dropped: DROP INDEX takes an exclusive lock on the table, so it has to wait for every transaction that is currently using the table to finish.

You can confirm that by querying the pg_stat_activity view and looking for activity involving the table on which you created the index.
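
For example, a minimal sketch (column names assume PostgreSQL 9.2 or newer; on older versions, pid and query are procpid and current_query):

-- Sessions whose current (or most recent) statement mentions the table
SELECT pid, state, query_start, query
FROM pg_stat_activity
WHERE query ILIKE '%cservicelocationdivisionstatus%';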

– khampson
  • This was the problem. Had a long-running query (>20 minutes) that was using the table. As soon as I killed the query, it took 51 ms to drop the index. – Mar Oct 13 '14 at 18:21
  • Ah. 51ms sounds more like it. =) – khampson Oct 13 '14 at 18:21
  • DROP INDEX CONCURRENTLY might be another quick fix for the issue. – kert Mar 02 '16 at 01:03
  • @kert: It wouldn't so much help with the time the drop itself takes, but it would help with how it affects other accessors. It removes the need for an exclusive lock and essentially gives accessors a chance to complete before the drop is done. But it does come with some caveats as well [(specifics here)](http://www.postgresql.org/docs/current/static/sql-dropindex.html); see the sketch after this comment thread. – khampson Mar 02 '16 at 02:40
  • Cool, the statement completed immediately once the other clients' connections were forcefully dropped. I hope pg can return some useful message in this case, instead of leaving users waiting so long for nothing. – Ben Jul 05 '17 at 15:32
  • @Ben: Glad it worked for your case. The reason Postgres doesn't return an error/message here is that this isn't an error/edge case, but rather expected locking behavior. If there was an actual deadlock -- see section '13.3.4 Deadlocks' in https://www.postgresql.org/docs/current/static/explicit-locking.html for more info -- Postgres would detect that and provide an exception message. – khampson Jul 06 '17 at 00:33
  • Thanks, the locking part is very useful. – Ben Jul 06 '17 at 03:00
  • @Ben: Sure, no problem. – khampson Jul 06 '17 at 03:18
  • I had to restart the service (Windows 10 with PostgreSQL 9.5) as some view was still running its query. – Daniël Tulp Jan 23 '18 at 19:19
  • My problem was that the massive table was doing autovacuum, so it was blocked until that finished. It was a 'drop index concurrently', but that made no difference to it being blocked. – Ponny Jul 20 '18 at 05:05
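
Pulling the comments above together, a minimal sketch for finding and clearing whatever is blocking the drop (assumes PostgreSQL 9.6+ for pg_blocking_pids; pg_terminate_backend forcibly ends the blocking session, so use it with care, and DROP INDEX CONCURRENTLY cannot run inside a transaction block):

-- From a separate session: which backends are blocked, and by whom?
SELECT pid, pg_blocking_pids(pid) AS blocked_by, state, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;

-- Option 1: end the blocking session (12345 is a hypothetical pid from the query above), then retry the drop
SELECT pg_terminate_backend(12345);

-- Option 2: drop without taking an exclusive lock, so other queries keep running
-- (it still waits for transactions that are using the table to finish)
DROP INDEX CONCURRENTLY idx_cservicelocationdivisionstatus_inversed;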

For MySQL, the following commands worked for me to find and kill long-running queries. As the top answer by khampson stated, this was my problem as well.

SHOW FULL PROCESSLIST;
KILL <process id>;  -- use the Id column from the SHOW FULL PROCESSLIST output
Related:
  1. How can I view live MySQL queries?
  2. How to kill/stop a long SQL query immediately?
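
If the processlist is long, a minimal sketch for narrowing it down to long-running statements (assumes access to information_schema; the 60-second threshold is arbitrary):

-- Queries that have been running for more than 60 seconds
SELECT id, user, db, time, info
FROM information_schema.processlist
WHERE command <> 'Sleep' AND time > 60
ORDER BY time DESC;

-- Then terminate the offender by its id (4321 is a hypothetical id from the query above)
KILL 4321;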
– Joshua Wolff