86

I have duplicate rows in my table and I want to delete them in the most efficient way, since the table is big. After some research, I have come up with this query:

WITH TempEmp AS
(
SELECT name, ROW_NUMBER() OVER(PARTITION by name, address, zipcode ORDER BY name) AS duplicateRecCount
FROM mytable
)
-- Now Delete Duplicate Records
DELETE FROM TempEmp
WHERE duplicateRecCount > 1;

But it only works in SQL Server, not in Netezza. It would seem that it does not like the DELETE after the WITH clause?

moe
  • If it's a one-time job, why wouldn't you run it in the postgresql console? – zerkms Nov 06 '14 at 00:02
  • No, it is not a one-time job; it is weekly and we always get some duplicate values. Thanks – moe Nov 06 '14 at 00:06
  • Why do you get duplicate values? What if you just don't put them there in the first place? – zerkms Nov 06 '14 at 03:02
  • Are duplicates defined by the columns `(name, address, zipcode)`? Are there other columns? Are those irrelevant? Different? Is any combination of columns unique? If some columns differ between duplicates, which row out of each set do you want to keep? – Erwin Brandstetter Nov 06 '14 at 06:13
  • Works for PostgreSQL (also works in AWS Redshift): [View the answer to this question on another page](https://stackoverflow.com/a/45606037/7712680) – Golokesh Patra Aug 10 '17 at 09:02

10 Answers

93

I like @erwin-brandstetter's solution, but wanted to show a solution with the USING keyword:

DELETE   FROM table_with_dups T1
  USING       table_with_dups T2
WHERE  T1.ctid    < T2.ctid       -- delete the "older" ones
  AND  T1.name    = T2.name       -- list columns that define duplicates
  AND  T1.address = T2.address
  AND  T1.zipcode = T2.zipcode;

If you want to review the records before deleting them, simply replace DELETE with SELECT * and USING with a comma (,), i.e.

SELECT * FROM table_with_dups T1
  ,           table_with_dups T2
WHERE  T1.ctid    < T2.ctid       -- select the "older" ones
  AND  T1.name    = T2.name       -- list columns that define duplicates
  AND  T1.address = T2.address
  AND  T1.zipcode = T2.zipcode;

Update: I tested some of the different solutions here for speed. If you don't expect many duplicates, then this solution performs much better than the ones that have a NOT IN (...) clause, as those generate a lot of rows in the subquery.

If you rewrite the query to use IN (...), then it performs similarly to the solution presented here, but the SQL code becomes much less concise.

Update 2: If you have NULL values in one of the key columns (which you really shouldn't IMO), then you can use COALESCE() in the condition for that column, e.g.

  AND COALESCE(T1.col_with_nulls, '[NULL]') = COALESCE(T2.col_with_nulls, '[NULL]')
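
An alternative, if you would rather not pick a sentinel value like '[NULL]' that could in theory collide with real data, is IS NOT DISTINCT FROM, which treats two NULL values as equal (note that it may prevent index usage):

  AND T1.col_with_nulls IS NOT DISTINCT FROM T2.col_with_nulls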
isapir
  • Erwin's answer is better because it handles `NULL` values correctly and does not require typing in the column names twice. – Gordon Linoff Oct 19 '17 at 18:52
  • As I've written in the beginning of my answer: `I like @erwin-brandstetter 's solution, but wanted to show a solution ...`. Upon finding the performance benefits though, I like the `USING` solution better, especially for large tables. I added an example that shows how to deal with `NULL` values. – isapir Oct 19 '17 at 19:05
  • Very nice, especially the possibility to have a look first. To check for NULL values in the data columns, I generated a `T1.col = T2.col OR (T1.col IS NULL AND T2.col IS NULL)` criterion for each column, based on the `\dS` output of my table. Now I can add my primary key constraint. – Tobias Mar 13 '18 at 14:30
  • You can test the NULL values with coalesce(), as described in Update 2 of my answer. – isapir Mar 13 '18 at 14:34
  • Right. When generating the query with regular expressions, it doesn't really matter, and my expressions are foolproof, even if there might be (for whatever reason) `'[NULL]'` values in some text fields. – Tobias Mar 13 '18 at 14:43
  • The SELECT statement shows only one record if they are duplicated (good), but shows the 3 of them if triplicated, thus DELETE would delete all of them (!!). – David Suarez Sep 10 '18 at 10:21
  • Thanks, this proved much faster than other solutions. I gave up after 1 hour for some of the versions out there, this was done almost instantly – user1978816 Oct 18 '18 at 14:13
  • @DavidSuarez No, it would not delete all of them. SELECT shows 3 combinations - (A, B), (A, C), (B, C) - but DELETE deletes only the first row in each combination, thus it would supposedly delete A row twice. Row C will stay intact in any way, only 2 rows would be deleted. – Jānis Elmeris May 26 '21 at 10:39
  • Helpful solution for me as I could visually check the delete list prior to execution. – qnm Feb 16 '22 at 01:33
68

If you have no other unique identifier, you can use ctid:

delete from mytable
    where exists (select 1
                  from mytable t2
                  where t2.name = mytable.name and
                        t2.address = mytable.address and
                        t2.zip = mytable.zip and
                        t2.ctid > mytable.ctid
                 );

It is a good idea to have a unique, auto-incrementing id in every table. Doing a delete like this is one important reason why.
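
A minimal sketch of adding such a column (assuming Postgres 10 or later; the column name id is arbitrary):

ALTER TABLE mytable
    ADD COLUMN id bigint GENERATED ALWAYS AS IDENTITY;  -- backfills existing rows with unique values

On older versions, a serial column achieves the same effect.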

Gordon Linoff
  • I don't have any field called ctid in my table. Can you explain where you got this? Thanks – moe Nov 06 '14 at 00:34
  • ctid is a hidden field. It does not show up when you retrieve the table definition. It is a kind of internal row number. – wildplasser Nov 06 '14 at 01:14
  • @moe . . . The documentation is here: http://www.postgresql.org/docs/9.2/static/ddl-system-columns.html. – Gordon Linoff Nov 06 '14 at 02:03
  • `where not exists` will delete the rows **without** duplicates. Should be `where exists (select 1` – Juan Carlos Oropeza Mar 04 '16 at 15:39
  • Note this will not work with a json field: `operator does not exist: json = json` – anmol Jun 27 '17 at 16:08
  • OT: I'm looking at your book on Amazon. Are the examples in the book written for Postgres or are they more generic SQL or written for some other DBMS? – isapir Oct 19 '17 at 19:22
  • @isapir . . . That question is very off-topic for a comment. Presumably, you are referring to "Data Analysis Using SQL and Excel". The original version was written for SQL Server. I think the second version was also for SQL Server (although in all honesty, much of the work was done in Postgres and then translated back). – Gordon Linoff Oct 20 '17 at 00:56
  • @GordonLinoff - Thanks for clarifying. I know that it's off-topic; that's what `OT:` stands for in the prefix of my question ;) – isapir Oct 20 '17 at 16:11
  • my table just couldn't have a unique identifier in any way.. – stucash Jun 16 '20 at 18:22
  • In my small table I did: `select ctid, * from table`. `ctid` was represented as (0,1), (0,2), etc. So I was able to do a simple delete statement for the duplicate row: `delete from table where ctid = '(0,1)'` – Edward Jun 09 '21 at 21:54
44

In a perfect world, every table has a unique identifier of some sort.
In the absence of any unique column (or combination thereof), use the ctid column:

DELETE FROM tbl
WHERE  ctid NOT IN (
   SELECT min(ctid)                    -- ctid is NOT NULL by definition
   FROM   tbl
   GROUP  BY name, address, zipcode);  -- list columns defining duplicates

The above query is short, conveniently listing column names only once. NOT IN (SELECT ...) is a tricky query style when NULL values can be involved, but the system column ctid is never NULL.

Using EXISTS as demonstrated by @Gordon is typically faster. So is a self-join with the USING clause like @isapir added later. Both should result in the same query plan.

Important difference: These other queries treat NULL values as not equal, while GROUP BY (or DISTINCT or DISTINCT ON ()) treats NULL values as equal. Does not matter for columns defined NOT NULL. Else, depending on your definition of "duplicate", you'll need one approach or the other. Or use IS NOT DISTINCT FROM to compare values (which may exclude some indexes).
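
A quick way to see the difference for yourself:

SELECT NULL = NULL;                     -- yields NULL, not TRUE: equality joins never match NULL duplicates
SELECT NULL IS NOT DISTINCT FROM NULL;  -- yields true: matches how GROUP BY treats NULLs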

Disclaimer:

ctid is an implementation detail of Postgres, it's not in the SQL standard and can change between major versions without warning (even if that's very unlikely). Its values can change between commands due to background processes or concurrent write operations (but not within the same command).

Aside:

The target of a DELETE statement cannot be the CTE, only the underlying table. That's a spillover from SQL Server - as is your whole approach.
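
For illustration, a sketch of the original CTE approach rewritten so it runs in Postgres: the WITH clause still feeds the DELETE, but the DELETE targets the table itself (this relies on ctid, so it is Postgres-only):

WITH dups AS (
   SELECT ctid,
          ROW_NUMBER() OVER (PARTITION BY name, address, zipcode
                             ORDER BY ctid) AS duplicateRecCount
   FROM   mytable
   )
DELETE FROM mytable
WHERE  ctid IN (SELECT ctid FROM dups WHERE duplicateRecCount > 1);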

Erwin Brandstetter
  • I like this solution because it's very concise. Any thoughts about the performance of the solution that I posted below? https://stackoverflow.com/a/46775289/968244 – isapir Oct 16 '17 at 17:04
  • I was actually able to test it. I have a table with about 350k rows and it had 39 duplicates over 7 columns with no indices. I tried the `GROUP BY` solution first and it was taking over 30 seconds so I killed it. I then tried the `USING` solution and it completed in about 16 seconds. – isapir Oct 16 '17 at 17:15
  • @isapir: Like I mentioned back in 2014: `NOT IN` is conveniently short syntax, but `EXISTS` is faster. (Same as your completely valid query with the `USING` clause.) But there is a subtle difference. I added a note above. – Erwin Brandstetter Oct 17 '17 at 04:01
  • Cool. Thanks for clarifying. – isapir Oct 17 '17 at 04:41
10

Here is what I came up with, using GROUP BY:

DELETE FROM mytable
WHERE id NOT IN (
  SELECT MIN(id)
  FROM mytable
  GROUP BY name, address, zipcode
);

It deletes the duplicates, preserving the row with the lowest id from each set of duplicates.
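
If you would rather keep the newest row from each set instead (assuming a larger id means a newer row), a sketch swapping MIN for MAX:

DELETE FROM mytable
WHERE id NOT IN (
  SELECT MAX(id)  -- keep the highest id per group
  FROM mytable
  GROUP BY name, address, zipcode
);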

Bruno Calza
9

We can use a window function to remove duplicate rows very effectively:

DELETE FROM tab 
  WHERE id IN (SELECT id 
                  FROM (SELECT row_number() OVER (PARTITION BY column_with_duplicate_values), id 
                           FROM tab) x 
                 WHERE x.row_number > 1);

A PostgreSQL-optimized version (using ctid):

DELETE FROM tab 
  WHERE ctid = ANY(ARRAY(SELECT ctid 
                  FROM (SELECT row_number() OVER (PARTITION BY column_with_duplicate_values), ctid 
                           FROM tab) x 
                 WHERE x.row_number > 1));
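
Note that without an ORDER BY inside OVER (...), which row survives in each group is arbitrary. A sketch making the survivor deterministic by keeping the row with the lowest ctid (the column name is a placeholder, as above):

DELETE FROM tab
  WHERE ctid = ANY(ARRAY(SELECT ctid
                  FROM (SELECT row_number() OVER (PARTITION BY column_with_duplicate_values
                                                  ORDER BY ctid), ctid
                           FROM tab) x
                 WHERE x.row_number > 1));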
Vivek S.
3

The valid syntax is specified at http://www.postgresql.org/docs/current/static/sql-delete.html

I would ALTER your table to add a unique auto-incrementing primary key id so that you can run a query like the following, which will keep the first of each set of duplicates (i.e. the one with the lowest id). Note that adding the key is a bit more complicated in Postgres than in some other DBs; see the sketch after the query.

DELETE FROM mytable d USING (
  SELECT min(id) AS id, name, address, zip  -- alias min(id) so it can be referenced as k.id
  FROM mytable
  GROUP BY name, address, zip HAVING COUNT(*) > 1
) AS k
WHERE d.id <> k.id
AND d.name=k.name
AND d.address=k.address
AND d.zip=k.zip;
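
A minimal sketch of the ALTER step mentioned above (assuming the table has no id column yet):

ALTER TABLE mytable ADD COLUMN id serial PRIMARY KEY;  -- creates a sequence and backfills existing rows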
Joe Murray
2

To remove duplicates (keep only one entry) from a table "tab" where data looks like this:

fk_id_1  fk_id_2
12       32
12       32
12       32
15       37
15       37

You can do this:

DELETE FROM tab WHERE ctid IN 
  (SELECT ctid FROM 
    (SELECT ctid, fk_id_1, fk_id_2, row_number() OVER (PARTITION BY fk_id_1, fk_id_2 ORDER BY fk_id_1) AS rnum FROM tab) t 
  WHERE t.rnum > 1);

Here ctid is the physical location of the row version within its table (and therefore usable as a row identifier), and row_number is a window function that assigns a sequential integer to each row in a result set.

PARTITION BY groups the result set, and the sequential integer restarts for each group.

James Risner
1

If you want a unique identifier for every row, you could just add one (a serial, or a guid), and treat it like a surrogate key.


CREATE TABLE thenames
        ( name text not null
        , address text not null
        , zipcode text not null
        );
INSERT INTO thenames(name,address,zipcode) VALUES
('James', 'main street', '123' )
,('James', 'main street', '123' )
,('James', 'void street', '456')
,('Alice', 'union square' , '123')
        ;

SELECT * FROM thenames;

        -- add a surrogate key
ALTER TABLE thenames
        ADD COLUMN seq serial NOT NULL PRIMARY KEY
        ;
SELECT * FROM thenames;

DELETE FROM thenames del
WHERE EXISTS(
        SELECT * FROM thenames x
        WHERE x.name=del.name
        AND x.address=del.address
        AND x.zipcode=del.zipcode
        AND x.seq < del.seq
        );

        -- add the unique constraint, so that new duplicates cannot be created in the future
ALTER TABLE thenames
        ADD UNIQUE (name,address,zipcode)
        ;

SELECT * FROM thenames;
wildplasser
0

From the documentation on deleting duplicate rows:

A frequent question in IRC is how to delete rows that are duplicates over a set of columns, keeping only the one with the lowest ID. This query does that for all rows of tablename having the same column1, column2, and column3.

DELETE FROM tablename
WHERE id IN (SELECT id
             FROM (SELECT id,
                          ROW_NUMBER() OVER (PARTITION BY column1, column2, column3 ORDER BY id) AS rnum
                   FROM tablename) t
             WHERE t.rnum > 1);

Sometimes a timestamp field is used instead of an ID field.
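
For example, a sketch keeping the earliest row by a hypothetical created_at timestamp column, with id as a tie-breaker:

DELETE FROM tablename
WHERE id IN (SELECT id
             FROM (SELECT id,
                          ROW_NUMBER() OVER (PARTITION BY column1, column2, column3 ORDER BY created_at, id) AS rnum
                   FROM tablename) t
             WHERE t.rnum > 1);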

Chad Crowe
0

For smaller tables, we can use the rowid pseudo column (Oracle's built-in row identifier) to delete duplicate rows.

You can use the query below:

DELETE FROM table1 t1
WHERE  t1.rowid > (SELECT min(t2.rowid)
                   FROM   table1 t2
                   WHERE  t1.column = t2.column);
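
PostgreSQL has no rowid pseudo column; a sketch of the same pattern using ctid instead, with the question's mytable columns standing in for whatever defines a duplicate:

DELETE FROM mytable t1
WHERE  t1.ctid > (SELECT min(t2.ctid)
                  FROM   mytable t2
                  WHERE  t1.name    = t2.name
                  AND    t1.address = t2.address
                  AND    t1.zipcode = t2.zipcode);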