2

You might think I came straight here to ask my question, but I googled an awful lot without finding a decisive answer.

Facts: I have a table with 3.3 million rows and 20 columns. The first column is the primary key and thus unique.

I have to remove all the rows where columns 2 through 11 are duplicated. It's a basic question, really, but there are so many different approaches, even though everyone is after the same thing in the end: removing the duplicates.

I was personally thinking about GROUP BY ... HAVING COUNT(*) > 1. Is that the way to go, or what do you suggest?

Thanks a lot in advance! L

laurens
  • 497
  • 14
  • 28
  • I think you meant column about the PK, but hey if you get it we get it ;). – JonH Jan 05 '11 at 18:01
  • I assume you meant column2 through column 11? – Kenneth Jan 05 '11 at 18:01
  • 1
    @Kenneth - Good question. @laurens If you did mean column on average how many duplicates are there per group? The percentage of the table that needs deleting will be a factor. Related: http://stackoverflow.com/questions/18932/sql-how-can-i-remove-duplicate-rows – Martin Smith Jan 05 '11 at 18:03
  • Also see this related question and answers: http://stackoverflow.com/questions/3345268/how-to-delete-completely-duplicate-rows – marc_s Jan 05 '11 at 18:21
  • First of all, thanks for all the answers. I will immediately start testing all of them but clearly CTE is the way to go. Of course I meant column2 till column11 instead of row, stupid mistake! – laurens Jan 06 '11 at 09:29
  • @Martin. The percentage of duplicates in this table is around 75% (!) – laurens Jan 06 '11 at 10:26

4 Answers

7

As a generic answer:

WITH cte AS (
  SELECT ROW_NUMBER() OVER (
      PARTITION BY <groupbyfield> ORDER BY <tiebreaker>) as rn
  FROM Table)
DELETE FROM cte
WHERE rn > 1;

I find this more powerful and flexible than GROUP BY ... HAVING. In fact, GROUP BY ... HAVING only gives you the duplicates; you're still left with the 'trivial' task of choosing a 'keeper' amongst the duplicates.
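
For contrast, a rough sketch of what GROUP BY ... HAVING alone gives you (the table and column names are made up for illustration, not from the question):

SELECT Col2, Col3, COUNT(*) AS DupCount
FROM dbo.MyTable
GROUP BY Col2, Col3
HAVING COUNT(*) > 1;

That only lists the duplicated combinations; you still have to join back to the table and decide which row in each group survives.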

ROW_NUMBER() OVER (...) gives more control over how to distinguish among duplicates (the tiebreaker) and allows for behavior like 'keep the first 3 of the duplicates', not just 'keep only 1' - behavior that is really hard to achieve with GROUP BY ... HAVING.
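
For example, keeping the three newest rows of every group is just a change of the filter (again with made-up names; CreatedDate stands in for whatever tiebreaker you have):

;WITH cte AS (
  SELECT ROW_NUMBER() OVER (
      PARTITION BY Col2, Col3 ORDER BY CreatedDate DESC) AS rn
  FROM dbo.MyTable)
DELETE FROM cte
WHERE rn > 3;   -- keep the first 3 rows of each group, delete the rest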

The other part of your question is how to approach this for 3.3M rows. Well, 3.3M is not really that big, but I would still recommend doing this in batches - DELETE TOP (10000) at a time - otherwise you'll push a huge transaction into the log and might overwhelm your log drives.
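
A rough sketch of that batching idea (dbo.MyTable, its PK column ID, and the duplicate-defining columns Col2 ... Col11 are placeholder names, not the poster's real schema):

DECLARE @rows INT;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    ;WITH cte AS (
        SELECT ROW_NUMBER() OVER (
            PARTITION BY Col2, Col3, Col4, Col5, Col6,
                         Col7, Col8, Col9, Col10, Col11
            ORDER BY ID) AS rn
        FROM dbo.MyTable)
    DELETE TOP (10000) FROM cte   -- small batches keep each transaction (and the log) manageable
    WHERE rn > 1;

    SET @rows = @@ROWCOUNT;       -- stop once a batch finds nothing left to delete
END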

And the final question is whether this will perform acceptably. That depends on your schema. If the ROW_NUMBER() has to scan the entire table and spool to number the rows, and you have to repeat this in batches N times, then it won't perform well. An appropriate index will help. But I can't say anything more without knowing the exact schema involved (structure of clustered index/heap, all non-clustered indexes, etc.).
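
If the duplicate-defining columns are known up front, one option (hypothetical names again) is a non-clustered index keyed on them, so each batch can number the rows from an ordered scan instead of sorting the whole table:

CREATE NONCLUSTERED INDEX IX_MyTable_Dedup
    ON dbo.MyTable (Col2, Col3, Col4, Col5, Col6, Col7, Col8, Col9, Col10, Col11);

Whether that pays off depends on the column widths (the 900-byte index key limit may force you to leave wide columns out) and on how much of the table survives the cleanup.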

Remus Rusanu
  • 288,378
  • 40
  • 442
  • 569
  • I would like to vote up but just don't have enough reputation...! Anyway, this is what you need; it can be extended with more columns, like I did. Thank you!! – laurens Jan 06 '11 at 16:34
  • 1
    WITH cte AS ( SELECT ROW_NUMBER() OVER ( PARTITION BY [VIN],[DATE],[MILEAGE],[OPERATION],[SLAnr],[SiteID],[RO] ORDER BY [VIN]) as rn FROM [AD_MASTER].[dbo].[AS_VIN_HISTORY]) DELETE FROM cte WHERE rn > 1; – laurens Jan 06 '11 at 16:35
2

Group by the fields you want to be unique, and get an aggregate value (like min) for your pk field. Then insert those results into a new table.
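
A sketch of that idea with made-up names (ID as the PK, Col2 ... Col11 as the columns that have to be unique):

-- one surviving PK per unique combination of the duplicate-defining columns
SELECT MIN(ID) AS KeepID
INTO #Keepers
FROM dbo.MyTable
GROUP BY Col2, Col3, Col4, Col5, Col6, Col7, Col8, Col9, Col10, Col11;

-- copy the full surviving rows into a new table, leaving the original untouched
SELECT t.*
INTO dbo.MyTable_Deduped
FROM dbo.MyTable AS t
JOIN #Keepers AS k ON k.KeepID = t.ID;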

Beth
  • 9,531
  • 1
  • 24
  • 43
  • +1, just might not need to insert them into a new table depending on your end result is all :) – James Jan 05 '11 at 17:59
  • I would put it in a new table for debugging purposes and rename or archive the existing table. I wouldn't delete these rows without testing the results with the users and gaining their approval. – Beth Jan 05 '11 at 19:49
2

If you have SQL Server 2005 or newer, then the easiest way would be to use a CTE (Common Table Expression).

You need to know what criteria you want to "partition" your data by - e.g. create partitions of data that is considered identical/duplicate - and then you need to order those partitions by something - e.g. a sequence ID, a date/time or something.

You didn't provide many details about your tables - so let me just give you a sample:

;WITH Duplicates AS
(
   SELECT  
       OrderID,
       ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY OrderDate DESC) AS RowN
   FROM
       dbo.Orders
)
DELETE FROM Duplicates
WHERE RowN > 1

The CTE ( WITH ... AS (...) ) gives you an "inline view" for the next SQL statement - it's not persisted or anything - it just lives for that next statement and then it's gone.

Basically, I'm "grouping" (partitioning) my Orders by CustomerID, and ordering by OrderDate. So for each CustomerID, I get a new "group" of data, which gets a row number starting with 1. The ORDER BY OrderDate DESC gives the newest order for each customer the RowN = 1 value - this is the one order I keep.

All other orders for each customer are deleted based on the CTE (the WITH ... expression).

You'll need to adapt this for your own situation, obviously - but the CTE with PARTITION BY and ROW_NUMBER() is a very reliable and easy technique to get rid of duplicates.

marc_s
  • 732,580
  • 175
  • 1,330
  • 1,459
1

If you don't want to deal with a new table, then just use DELETE TOP(1). Use a subquery to get all the IDs of rows that are duplicates, and then use the DELETE TOP to delete where there are multiple rows. You might have to run it more than once if there is more than one duplicate per group, but you get the point.

DELETE TOP (1) FROM Table
WHERE ID IN (SELECT MAX(ID)   -- one ID per duplicate group; a bare ID would not be valid with GROUP BY
             FROM Table GROUP BY Field HAVING COUNT(*) > 1)

You get the idea hopefully. This is just some pseudo code to help demonstrate.

spinon
  • 10,760
  • 5
  • 41
  • 59
  • If you do a `DELETE TOP(1) FROM Table`, I would also make sure to have a proper `ORDER BY` clause in place! Otherwise what will the `TOP` refer to?? – marc_s Jan 05 '11 at 18:19
  • Yeah I guess since not all the columns are the same you would want to have that. I was just thinking about duplicate meaning all data is the same. But that is not the case. Thanks for mentioning that. – spinon Jan 05 '11 at 19:04