
I have a table with many duplicated rows - but I only want to deduplicate rows one partition at a time.

How can I do this?

As an example, you can start with a table partitioned by date and filled with random integers (generated here with the public fhoffa.x.random_int UDF):

CREATE OR REPLACE TABLE `temp.many_random`
PARTITION BY d
AS 
SELECT DATE('2018-10-01') d, fhoffa.x.random_int(0,5) random_int
FROM UNNEST(GENERATE_ARRAY(1, 100))
UNION ALL
SELECT CURRENT_DATE() d, fhoffa.x.random_int(0,5) random_int
FROM UNNEST(GENERATE_ARRAY(1, 100))

3 Answers


Let's see what data we have in the existing table:

SELECT d, random_int, COUNT(*) c
FROM `temp.many_random`
GROUP BY 1, 2
ORDER BY 1,2

[screenshot: query results showing each (d, random_int) combination with a count well above 1]

That's a lot of duplicates!

We can de-duplicate a single partition using MERGE and SELECT DISTINCT * with a query like this:

MERGE `temp.many_random` t
USING (
  # the de-duplicated rows for today's partition
  SELECT DISTINCT *
  FROM `temp.many_random`
  WHERE d=CURRENT_DATE()
)
ON FALSE
# delete every existing row in today's partition...
WHEN NOT MATCHED BY SOURCE AND d=CURRENT_DATE() THEN DELETE
# ...and re-insert only the distinct rows
WHEN NOT MATCHED BY TARGET THEN INSERT ROW

Then the end result looks like this:

[screenshot: today's partition now has one row per value; the 2018-10-01 partition is unchanged]

Make sure the date in the SELECT's WHERE clause matches the date in the WHEN NOT MATCHED BY SOURCE ... THEN DELETE clause. The statement deletes every row in that partition and re-inserts only the distinct rows returned by the SELECT.
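
If you want to double-check the result, you can re-run the earlier aggregate query scoped to today's partition (a quick sanity check against the same table); after the MERGE every count should be 1:

SELECT d, random_int, COUNT(*) c
FROM `temp.many_random`
WHERE d = CURRENT_DATE()
GROUP BY 1, 2
ORDER BY 1, 2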

Inspired by:

To de-duplicate a whole table, see:
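
As a rough sketch (not necessarily the approach in the answer referenced above), one way to de-duplicate the whole table is to rewrite it with SELECT DISTINCT while keeping the same partitioning; unlike the MERGE above, this re-processes every partition:

CREATE OR REPLACE TABLE `temp.many_random`
PARTITION BY d
AS
SELECT DISTINCT *
FROM `temp.many_random`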

    Is there a version of this solution which works for time ingestion partitioned tables? Judging from the error I'm getting, I can't immediately see how to rewrite this. - `Omitting INSERT target column list is unsupported for ingestion-time partitioned table` – Dominic Woodman Jun 17 '22 at 10:20

An additional answer, for complex rows where SELECT DISTINCT can't be used (for example, rows containing ARRAY or STRUCT columns):

MERGE `temp.many_random` t
USING (
  # keep a single representative row per duplicate group
  SELECT a.*
  FROM (
    SELECT ANY_VALUE(a) a
    FROM `temp.many_random` a
    WHERE d='2018-10-01'
    GROUP BY d, random_int  # the columns that identify a unique row (e.g. an id)
  )
)
ON FALSE
WHEN NOT MATCHED BY SOURCE AND d='2018-10-01'
  # delete everything in that partition...
  THEN DELETE
# ...and re-insert only the representative rows
WHEN NOT MATCHED BY TARGET THEN INSERT ROW
  • ANY_VALUE() and GROUP BY is a great pattern to deduplicate when there are many duplicates (in my case: > 300 k). Works much better than the ROW_NUMBER() approach I used before! – Sourygna May 28 '20 at 08:20
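
For comparison, the ROW_NUMBER() approach mentioned in the comment typically looks something like this sketch (a whole-table rewrite that keeps the first row per group, using the example table's columns):

CREATE OR REPLACE TABLE `temp.many_random`
PARTITION BY d
AS
SELECT * EXCEPT(rn)
FROM (
  SELECT *, ROW_NUMBER() OVER (PARTITION BY d, random_int) AS rn
  FROM `temp.many_random`
)
WHERE rn = 1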

You could also deduplicate for a range of partitions.

-- WARNING: back up the table before running this operation
-- For large timestamp-partitioned tables
-- -------------------------------------------
-- De-duplicates rows in a given range of a partitioned table, using surrogate_key as the unique id
-- -------------------------------------------

DECLARE dt_start DEFAULT TIMESTAMP("2019-09-17T00:00:00", "America/Los_Angeles");
DECLARE dt_end DEFAULT TIMESTAMP("2019-09-22T00:00:00", "America/Los_Angeles");

MERGE INTO `my_project`.`data_set`.`the_table` AS INTERNAL_DEST
USING (
  SELECT k.*
  FROM (
    -- keep one full row per surrogate_key within the range
    SELECT ARRAY_AGG(original_data LIMIT 1)[OFFSET(0)] k
    FROM `my_project`.`data_set`.`the_table` AS original_data
    WHERE stamp BETWEEN dt_start AND dt_end
    GROUP BY surrogate_key
  )
) AS INTERNAL_SOURCE
ON FALSE

WHEN NOT MATCHED BY SOURCE
  AND INTERNAL_DEST.stamp BETWEEN dt_start AND dt_end -- remove all data in the partition range
    THEN DELETE

WHEN NOT MATCHED THEN INSERT ROW

credit: https://gist.github.com/hui-zheng/f7e972bcbe9cde0c6cb6318f7270b67a
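
As a quick sanity check (a sketch against the same table and columns as above), you can count duplicate surrogate keys in that range; after the MERGE this should return no rows:

DECLARE dt_start DEFAULT TIMESTAMP("2019-09-17T00:00:00", "America/Los_Angeles");
DECLARE dt_end DEFAULT TIMESTAMP("2019-09-22T00:00:00", "America/Los_Angeles");

SELECT surrogate_key, COUNT(*) c
FROM `my_project`.`data_set`.`the_table`
WHERE stamp BETWEEN dt_start AND dt_end
GROUP BY surrogate_key
HAVING c > 1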
