
I'm trying to group rows with the same id in a table, to reduce the number of entries.

(screenshot of the table)

For example, id 1734815 can have duplicate ids {3209782, 3234332}.

To do that, we could for example write a loop (pseudocode):

for ad_id in (SELECT DISTINCT ad_id FROM table):
    rows = SELECT duplicate_id FROM table WHERE ad_id = {ad_id}
    ...
    INSERT INTO new_table (ad_id, _duplicate_id) VALUES ({ad_id}, {rows})

This works, but I think it's not good: heavy and slow.
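For reference, the loop above can be sketched end-to-end with Python and sqlite3 (the table name `duplicates` and the in-memory sample data are assumptions for illustration):

```python
import sqlite3

# In-memory database with sample data (table/column names are assumptions).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE duplicates (ad_id INTEGER, duplicate_id INTEGER)")
conn.executemany(
    "INSERT INTO duplicates VALUES (?, ?)",
    [(1734815, 3209782), (1734815, 3234332), (1800000, 3300000)],
)
conn.execute("CREATE TABLE grouped (ad_id INTEGER, duplicate_ids TEXT)")

# The loop from the question: one SELECT per distinct ad_id, then one INSERT.
for (ad_id,) in conn.execute("SELECT DISTINCT ad_id FROM duplicates").fetchall():
    rows = [str(r[0]) for r in conn.execute(
        "SELECT duplicate_id FROM duplicates WHERE ad_id = ?", (ad_id,))]
    conn.execute("INSERT INTO grouped VALUES (?, ?)", (ad_id, ",".join(rows)))

print(conn.execute("SELECT * FROM grouped ORDER BY ad_id").fetchall())
```

This issues one query per distinct `ad_id`, which is why it gets slow on large tables.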

Is it possible to do this with SQL only? Maybe with a subquery? Thank you
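(For context: many databases offer a string-aggregation function that does this in a single statement, e.g. `GROUP_CONCAT` in SQLite/MySQL or `STRING_AGG` in PostgreSQL and SQL Server 2017+. A minimal sketch in SQLite, with table and column names assumed:)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE duplicates (ad_id INTEGER, duplicate_id INTEGER)")
conn.executemany(
    "INSERT INTO duplicates VALUES (?, ?)",
    [(1734815, 3209782), (1734815, 3234332), (1800000, 3300000)],
)

# One statement: group by ad_id and concatenate the duplicate_ids per group.
conn.execute("""
    CREATE TABLE grouped AS
    SELECT ad_id, GROUP_CONCAT(duplicate_id) AS duplicate_ids
    FROM duplicates
    GROUP BY ad_id
""")

print(conn.execute("SELECT * FROM grouped ORDER BY ad_id").fetchall())
```

Note that `GROUP_CONCAT` does not guarantee the order of the concatenated values unless the dialect supports an `ORDER BY` inside the aggregate.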

Anthony Hervy
  • Why would you want to store the duplicates as comma-separated-values? – Salman A Sep 22 '21 at 12:57
  • To get fewer rows: there are too many duplicates, so I think it's better to store a unique ID and all its values in a single row @SalmanA – Anthony Hervy Sep 22 '21 at 15:08
  • 1
    What's even better is to normalise your data, so if there's a one-to-many relationship between records, you end up with a separate table mapping the `ad_id` and `duplicate_id` values to each other. If rather, you're wanting to extract this data this way for presentational purposes, then you should look into techniques using `STUFF` and `FOR XML PATH` like [this](https://stackoverflow.com/questions/273238/how-to-use-group-by-to-concatenate-strings-in-sql-server) or [this](https://stackoverflow.com/questions/44918658/sql-server-join-rows-into-comma-separated-list). – 3N1GM4 Sep 22 '21 at 15:34

0 Answers