306

I have a PostgreSQL database on which I want to do a few cascading deletes. However, the tables aren't set up with the ON DELETE CASCADE rule. Is there any way I can perform a delete and tell PostgreSQL to cascade it just this once? Something equivalent to

DELETE FROM some_table CASCADE;

The answers to this older question make it seem like no such solution exists, but I figured I'd ask this question explicitly just to be sure.

Eli Courtwright
    Please see my custom function below. It's possible with certain restrictions. – Joe Love Apr 20 '16 at 20:35
  • Anyone coming here, please note that in most cases a cascade delete can be dangerous. It's your DB and you know the references: keep a sequential delete script handy, pass an id to it, and get things deleted. It's a bit of work initially, but don't add ON DELETE CASCADE unless you are very sure. – amar Sep 11 '21 at 08:59

10 Answers

237

No. To do it just once you would simply write the delete statement for the table you want to cascade.

DELETE FROM some_child_table WHERE some_fk_field IN (SELECT some_id FROM some_table);
DELETE FROM some_table;
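
If other sessions may be inserting concurrently (as a comment below suggests), you can run both statements in one transaction so the child and parent deletes are applied atomically; a minimal sketch with the same table names:

BEGIN;
DELETE FROM some_child_table WHERE some_fk_field IN (SELECT some_id FROM some_table);
DELETE FROM some_table;
COMMIT;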
palehorse
  • This doesn't necessarily work, as there could be other foreign keys cascading from the original cascade (recursion). You can even get into a loop where table a refers to b, which refers to a. To achieve this in a general sense, see my function below, but it has some restrictions. If you have a simple table setup then try the code above; it's easier to comprehend what you're doing. – Joe Love Dec 13 '13 at 17:08
  • Simple, safe. You should run them in a single transaction if there is concurrent insert activity. – Ismail Yavuz May 09 '18 at 11:34
60

This command will delete all data from all tables that have a foreign key to the specified table, plus everything that foreign keys to those tables, and so on. Proceed with extreme caution.

If you really want DELETE FROM some_table CASCADE; (which means "remove all rows from the table some_table"), you can use TRUNCATE instead of DELETE; CASCADE is always supported with TRUNCATE. However, if you want a selective delete with a WHERE clause, TRUNCATE is not good enough.

USE WITH CARE - This will drop all rows of all tables which have a foreign key constraint on some_table and all tables that have constraints on those tables, etc.

Postgres supports CASCADE with the TRUNCATE command:

TRUNCATE some_table CASCADE;

Handily this is transactional (i.e. can be rolled back), although it is not fully isolated from other concurrent transactions, and has several other caveats. Read the docs for details.
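
Because TRUNCATE is transactional, one cautious pattern is to run it inside an explicit transaction and inspect the result before committing; a minimal sketch using the example table name above:

BEGIN;
TRUNCATE some_table CASCADE;
-- check what is left in the dependent tables here, then
ROLLBACK;   -- or COMMIT; once you are sure this is what you want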

bsplosion
DanC
    clearly "a few cascading deletes"≠dropping all data from the table… – lensovet Feb 14 '12 at 05:57
    This will drop all rows of all tables which have a foreign key constraint on some_table and all tables that have constraints on those tables, etc... this is potentially very dangerous. – AJP Apr 24 '13 at 18:19
  • I think the original question is problematic. It asked for a method to do a few cascading deletes, but then proceeded to ask whether there is "Something equivalent to `DELETE FROM some_table CASCADE;`". This answer is about the latter problem, which is rarely the thing anybody wants to do in reality. – Mikko Rantalainen Dec 20 '18 at 11:05
  • I literally googled "DELETE FROM some_table CASCADE" because that's what I wanted to do, and found this one, which is the most precise answer possible. Vote up from me. – Lutosław Mar 06 '19 at 13:18
  • If I ever had this use case, I'd figure out the list of tables that need truncating and write a separate TRUNCATE for each, in order to be very explicit about all of the nuking that is being done. – bradw2k Mar 22 '19 at 20:29
  • I just want to mention I did this in production at a reasonably large startup (>100M funding) and it deleted all our users plus all their order history. I don't even know why Postgres allows this syntax. At minimum it should be named CASCADE FOOTGUN TRUNCATE ALL TABLES WITH FOREIGN KEYS. We restored from backup, but it was painful and stressful because new users kept joining and ordering, so we had to do it incrementally for each of the tables involved and sort out conflicts by hand. I didn't lose my job (respect!), but I did have to present a detailed post-mortem to the entire department. – Eloff Jun 21 '21 at 17:59
  • Almost wiped my production database – Sohail Shrestha Jul 06 '22 at 09:28
    amazing for test teardown! – Ado Ren Dec 09 '22 at 06:15
  • Agreed that this is a nuclear weapon but it's also exactly what I needed to clear out and rebuild a processing database which had gotten gunked up but could be reconstructed from base data. I'd propose having a big red warning saying "this WILL delete data from all tables that have a foreign key to this table, plus everything that foreign keys to those tables, and so on". – bsplosion Dec 13 '22 at 21:44
    To people in this thread that are running this in production without understanding it; please for the love of all that is holy - educate yourselves before running anything (especially deletions) in your prod environments! Also; use transactions and verify result before commit! – aelgn Apr 27 '23 at 09:47
40

I wrote a (recursive) function to delete any row based on its primary key. I wrote this because I did not want to create my constraints as "on delete cascade". I wanted to be able to delete complex sets of data (as a DBA) but not allow my programmers to cascade delete without thinking through all of the repercussions.

I'm still testing this function out, so there may be bugs in it -- but please don't try it if your DB has multi-column primary (and thus foreign) keys. Also, all the keys have to be representable in string form, though it could be written in a way that doesn't have that restriction. I use this function VERY SPARINGLY anyway; I value my data too much to enable the cascading constraints on everything.

Basically the function is passed the schema, table name, and primary key value (in string form). It starts by finding any foreign keys on that table and checks whether any referencing data exists -- if it does, it recursively calls itself on the found data. It uses an array of data already marked for deletion to prevent infinite loops. Please test it out and let me know how it works for you. Note: it's a little slow.

I call it like so: select delete_cascade('public','my_table','1');

create or replace function delete_cascade(p_schema varchar, p_table varchar, p_key varchar, p_recursion varchar[] default null)
 returns integer as $$
declare
    rx record;
    rd record;
    v_sql varchar;
    v_recursion_key varchar;
    recnum integer;
    v_primary_key varchar;
    v_rows integer;
begin
    recnum := 0;
    select ccu.column_name into v_primary_key
        from
        information_schema.table_constraints  tc
        join information_schema.constraint_column_usage AS ccu ON ccu.constraint_name = tc.constraint_name and ccu.constraint_schema=tc.constraint_schema
        and tc.constraint_type='PRIMARY KEY'
        and tc.table_name=p_table
        and tc.table_schema=p_schema;

    for rx in (
        select kcu.table_name as foreign_table_name, 
        kcu.column_name as foreign_column_name, 
        kcu.table_schema foreign_table_schema,
        kcu2.column_name as foreign_table_primary_key
        from information_schema.constraint_column_usage ccu
        join information_schema.table_constraints tc on tc.constraint_name=ccu.constraint_name and tc.constraint_catalog=ccu.constraint_catalog and tc.constraint_schema=ccu.constraint_schema
        join information_schema.key_column_usage kcu on kcu.constraint_name=ccu.constraint_name and kcu.constraint_catalog=ccu.constraint_catalog and kcu.constraint_schema=ccu.constraint_schema
        join information_schema.table_constraints tc2 on tc2.table_name=kcu.table_name and tc2.table_schema=kcu.table_schema
        join information_schema.key_column_usage kcu2 on kcu2.constraint_name=tc2.constraint_name and kcu2.constraint_catalog=tc2.constraint_catalog and kcu2.constraint_schema=tc2.constraint_schema
        where ccu.table_name=p_table  and ccu.table_schema=p_schema
        and TC.CONSTRAINT_TYPE='FOREIGN KEY'
        and tc2.constraint_type='PRIMARY KEY'
)
    loop
        v_sql := 'select '||rx.foreign_table_primary_key||' as key from '||rx.foreign_table_schema||'.'||rx.foreign_table_name||'
            where '||rx.foreign_column_name||'='||quote_literal(p_key)||' for update';
        --raise notice '%',v_sql;
        --found a foreign key, now find the primary keys for any data that exists in any of those tables.
        for rd in execute v_sql
        loop
            v_recursion_key=rx.foreign_table_schema||'.'||rx.foreign_table_name||'.'||rx.foreign_column_name||'='||rd.key;
            if (v_recursion_key = any (p_recursion)) then
                --raise notice 'Avoiding infinite loop';
            else
                --raise notice 'Recursing to %,%',rx.foreign_table_name, rd.key;
                recnum:= recnum +delete_cascade(rx.foreign_table_schema::varchar, rx.foreign_table_name::varchar, rd.key::varchar, p_recursion||v_recursion_key);
            end if;
        end loop;
    end loop;
    begin
    --actually delete original record.
    v_sql := 'delete from '||p_schema||'.'||p_table||' where '||v_primary_key||'='||quote_literal(p_key);
    execute v_sql;
    get diagnostics v_rows= row_count;
    --raise notice 'Deleting %.% %=%',p_schema,p_table,v_primary_key,p_key;
    recnum:= recnum +v_rows;
    exception when others then recnum=0;
    end;

    return recnum;
end;
$$
language PLPGSQL;
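
Since the function deletes rows as it recurses, a cautious way to call it (as also suggested in the comments) is inside a transaction, so the result can be inspected before committing. A minimal sketch, reusing the example call from above:

begin;
select delete_cascade('public','my_table','1');
-- verify what was removed in the affected tables, then
commit;   -- or rollback;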
rkrzr
Joe Love
    It happens all the time especially with self referencing tables. Consider a company with different management tiers in different departments, or a generic hierarchical taxonomy. Yes, I agree that this function is not the absolute best thing since sliced bread, but it's a useful tool in the right situation. – Joe Love Mar 11 '16 at 16:08
  • If you rewrite it to accept an array of IDs and generate queries that use the `IN` operator with sub-selects instead of `=` (i.e. switch to set logic), it would become *much* faster. – Hubbitus Apr 29 '17 at 18:39
  • Thank you for your solution. I was writing some tests and needed to delete a record, and I was having trouble cascading that delete. Your function worked really well! – Fernando Camargo Jul 27 '17 at 12:42
  • @hubbitus I'm not sure how recursion would work in that scenario.. and the recursion is the real time slow-down from what I can tell. – Joe Love Jul 27 '17 at 15:09
  • @JoeLove What speed problem do you have? In that situation recursion is the only correct solution, to my mind. – Hubbitus Apr 22 '18 at 23:40
  • @Hubbitus It's not a "problem" per se, it's just that the recursion can vary depending on your DB setup and deleting a lot of data can be kind of inefficient because it recurses per row not per table, but there's no easy solution. For what this function does, it works well.. It's just not built for speed (IE, running in a web app).. it was built as a DBA tool. – Joe Love Jun 12 '18 at 20:13
  • @Joe Love Thank you for your solution, your function worked very well! I think the function can be improved by including a DISTINCT clause, because when multiple joins are made, the records can be duplicated. Line 22 could be something like select distinct kcu.table_name as foreign_table_name, – grand Apr 02 '19 at 14:41
  • I've just realized that our primary keys consist of multiple attributes, and the foreign keys do as well. The script expects single-column keys for those, doesn't it? – arthur Sep 18 '19 at 15:08
  • @arthur You could probably use some version of row -> json -> text to get it done; however, I've not gone that far. I've found through the years that a singular primary key (with potential secondary keys) is good for many reasons. – Joe Love Sep 19 '19 at 19:34
19

If I understand correctly, you should be able to do what you want by dropping the foreign key constraint, adding a new one (which will cascade), doing your stuff, and recreating the restricting foreign key constraint.

For example:

testing=# create table a (id integer primary key);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "a_pkey" for table "a"
CREATE TABLE
testing=# create table b (id integer references a);
CREATE TABLE

-- put some data in the table
testing=# insert into a values(1);
INSERT 0 1
testing=# insert into a values(2);
INSERT 0 1
testing=# insert into b values(2);
INSERT 0 1
testing=# insert into b values(1);
INSERT 0 1

-- restricting works
testing=# delete from a where id=1;
ERROR:  update or delete on table "a" violates foreign key constraint "b_id_fkey" on table "b"
DETAIL:  Key (id)=(1) is still referenced from table "b".

-- find the name of the constraint
testing=# \d b;
       Table "public.b"
 Column |  Type   | Modifiers 
--------+---------+-----------
 id     | integer | 
Foreign-key constraints:
    "b_id_fkey" FOREIGN KEY (id) REFERENCES a(id)

-- drop the constraint
testing=# alter table b drop constraint b_id_fkey;
ALTER TABLE

-- create a cascading one
testing=# alter table b add FOREIGN KEY (id) references a(id) on delete cascade; 
ALTER TABLE

testing=# delete from a where id=1;
DELETE 1
testing=# select * from a;
 id 
----
  2
(1 row)

testing=# select * from b;
 id 
----
  2
(1 row)

-- it works, do your stuff.
-- [stuff]

-- recreate the previous state
testing=# \d b;
       Table "public.b"
 Column |  Type   | Modifiers 
--------+---------+-----------
 id     | integer | 
Foreign-key constraints:
    "b_id_fkey" FOREIGN KEY (id) REFERENCES a(id) ON DELETE CASCADE

testing=# alter table b drop constraint b_id_fkey;
ALTER TABLE
testing=# alter table b add FOREIGN KEY (id) references a(id) on delete restrict; 
ALTER TABLE

Of course, you should abstract stuff like that into a procedure, for the sake of your mental health.
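
If you do go this route, a minimal sketch that keeps the whole swap inside one transaction (using the example tables a and b from above, with the constraint name pinned explicitly):

begin;
alter table b drop constraint b_id_fkey;
alter table b add constraint b_id_fkey foreign key (id) references a(id) on delete cascade;
delete from a where id = 1;
alter table b drop constraint b_id_fkey;
alter table b add constraint b_id_fkey foreign key (id) references a(id);
commit;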

Ryszard Szopa
  • Assuming the foreign key is supposed to prevent actions which would make the database inconsistent, this is not the way to deal with it. You can delete the "nasty" entry now, but you are leaving lots of zombie shards which could cause problems in the future. – Sprinterfreak Oct 29 '15 at 02:05
  • What shards do you mean exactly? The records will be deleted via the cascade, so there should be no inconsistency. – Pedro Borges Apr 08 '18 at 16:35
    rather than being concerned about "nasty shards" (cascading constraints will still be consistent), I'd be MORE concerned about the cascading not going far enough-- if the deleted records require further deleted records, then those constraints will need to be altered to ensure cascading as well. (or use the function I wrote above to avoid this scenario)... One last recommendation in any case: USE A TRANSACTION so you can roll it back if it goes awry. – Joe Love Jul 26 '18 at 18:51
14

Yeah, as others have said, there's no convenient 'DELETE FROM my_table ... CASCADE' (or equivalent). To delete non-cascading foreign key-protected child records and their referenced ancestors, your options include:

  • Perform all the deletions explicitly, one query at a time, starting with child tables (though this won't fly if you've got circular references); or
  • Perform all the deletions explicitly in a single (potentially massive) query; or
  • Assuming your non-cascading foreign key constraints were created as 'ON DELETE NO ACTION DEFERRABLE', perform all the deletions explicitly in a single transaction (see the sketch after this list); or
  • Temporarily drop the 'no action' and 'restrict' foreign key constraints in the graph, recreate them as CASCADE, delete the offending ancestors, drop the foreign key constraints again, and finally recreate them as they were originally (thus temporarily weakening the integrity of your data); or
  • Something probably equally fun.
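
For the deferrable-constraints option, a minimal sketch (this only works if the constraints were declared DEFERRABLE; the table and column names are the hypothetical ones from the first answer):

begin;
set constraints all deferred;
delete from some_table where some_id = 1;              -- parent first is fine now
delete from some_child_table where some_fk_field = 1;  -- children gone before commit
commit;                                                -- FK checks run here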

It's on purpose that circumventing foreign key constraints isn't made convenient, I assume; but I do understand why in particular circumstances you'd want to do it. If it's something you'll be doing with some frequency, and if you're willing to flout the wisdom of DBAs everywhere, you may want to automate it with a procedure.

I came here a few months ago looking for an answer to the "CASCADE DELETE just once" question (originally asked over a decade ago!). I got some mileage out of Joe Love's clever solution (and Thomas C. G. de Vilhena's variant), but in the end my use case had particular requirements (handling of intra-table circular references, for one) that forced me to take a different approach. That approach ultimately became recursively_delete (PG 10.10).

I've been using recursively_delete in production for a while, now, and finally feel (warily) confident enough to make it available to others who might wind up here looking for ideas. As with Joe Love's solution, it allows you to delete entire graphs of data as if all foreign key constraints in your database were momentarily set to CASCADE, but offers a couple additional features:

  • Provides an ASCII preview of the deletion target and its graph of dependents.
  • Performs deletion in a single query using recursive CTEs.
  • Handles circular dependencies, intra- and inter-table.
  • Handles composite keys.
  • Skips 'set default' and 'set null' constraints.
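
A usage sketch taken from the comment thread below (the reported signature is recursively_delete(regclass, anyelement, boolean); consult the project's README for what the boolean flag controls, as it isn't documented here):

select recursively_delete('public.some_table', 42, false);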
TRL
  • I'm getting an error: ERROR: array must have even number of elements Where: PL/pgSQL function _recursively_delete(regclass,text[],integer,jsonb,integer,text[],jsonb,jsonb) line 15 at assignment SQL statement "SELECT * FROM _recursively_delete(ARG_table, VAR_pk_col_names)" PL/pgSQL function recursively_delete(regclass,anyelement,boolean) line 73 at SQL statement – Joe Love Jan 14 '20 at 15:54
  • Hey, @JoeLove. Thanks for trying it out. Can you give me steps to reproduce? And what's your version of PG? – TRL Jan 14 '20 at 16:59
  • I'm not sure this will help, but I just created your functions and then ran the following code: select recursively_delete('dallas.vendor',1094,false). After some debugging, I find that this dies right off the bat -- meaning, it seems to be the first call to the function, not after doing multiple things. For reference, I'm running PG 10.8. – Joe Love Jan 15 '20 at 16:54
  • @JoeLove, Kindly try branch trl-fix-array_must_have_even_number_of_element (https://github.com/trlorenz/PG-recursively_delete/pull/2). – TRL Jan 21 '20 at 20:28
  • Tried that branch and it did fix the original error. Sadly, it's not any faster than my original version (which may not have been your point in writing this in the first place). I'm working on another attempt that creates duplicate foreign keys with "on delete cascade", then deletes the original record, then drops all the newly created foreign keys. – Joe Love Jan 22 '20 at 20:20
  • Yeah, @JoeLove, recursively_delete is meant as a big hammer for occasional administrative use. Instead of speed, I was more concerned with the capability to handle a really wide, really deep, really messed-up graph, with circular dependencies both within and between tables, composite keys, etc. Also, performing all deletions in a single query sidestepped some race/consistency concerns for me, while essentially having the effect of ignoring FK constraints during the execution. – TRL Jan 22 '20 at 22:16
8

I cannot comment on palehorse's answer, so I added my own. Palehorse's logic is OK, but efficiency can be bad with big data sets.

DELETE FROM some_child_table sct
 WHERE EXISTS (SELECT 1 FROM some_table st
                WHERE sct.some_fk_field = st.some_id);

DELETE FROM some_table;

It is faster if you have indexes on the columns and the data set is bigger than a few records.
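
For example, an index on the referencing column (hypothetical names from the snippet above) also speeds up the referential check that runs when rows in the parent table are deleted:

CREATE INDEX IF NOT EXISTS some_child_table_fk_idx ON some_child_table (some_fk_field);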

Grzegorz Grabek
5

I took Joe Love's answer and rewrote it using the IN operator with sub-selects instead of = to make the function faster (according to Hubbitus's suggestion):

create or replace function delete_cascade(p_schema varchar, p_table varchar, p_keys varchar, p_subquery varchar default null, p_foreign_keys varchar[] default array[]::varchar[])
 returns integer as $$
declare

    rx record;
    rd record;
    v_sql varchar;
    v_subquery varchar;
    v_primary_key varchar;
    v_foreign_key varchar;
    v_rows integer;
    recnum integer;

begin

    recnum := 0;
    select ccu.column_name into v_primary_key
        from
        information_schema.table_constraints  tc
        join information_schema.constraint_column_usage AS ccu ON ccu.constraint_name = tc.constraint_name and ccu.constraint_schema=tc.constraint_schema
        and tc.constraint_type='PRIMARY KEY'
        and tc.table_name=p_table
        and tc.table_schema=p_schema;

    for rx in (
        select kcu.table_name as foreign_table_name, 
        kcu.column_name as foreign_column_name, 
        kcu.table_schema foreign_table_schema,
        kcu2.column_name as foreign_table_primary_key
        from information_schema.constraint_column_usage ccu
        join information_schema.table_constraints tc on tc.constraint_name=ccu.constraint_name and tc.constraint_catalog=ccu.constraint_catalog and tc.constraint_schema=ccu.constraint_schema
        join information_schema.key_column_usage kcu on kcu.constraint_name=ccu.constraint_name and kcu.constraint_catalog=ccu.constraint_catalog and kcu.constraint_schema=ccu.constraint_schema
        join information_schema.table_constraints tc2 on tc2.table_name=kcu.table_name and tc2.table_schema=kcu.table_schema
        join information_schema.key_column_usage kcu2 on kcu2.constraint_name=tc2.constraint_name and kcu2.constraint_catalog=tc2.constraint_catalog and kcu2.constraint_schema=tc2.constraint_schema
        where ccu.table_name=p_table  and ccu.table_schema=p_schema
        and TC.CONSTRAINT_TYPE='FOREIGN KEY'
        and tc2.constraint_type='PRIMARY KEY'
)
    loop
        v_foreign_key := rx.foreign_table_schema||'.'||rx.foreign_table_name||'.'||rx.foreign_column_name;
        v_subquery := 'select "'||rx.foreign_table_primary_key||'" as key from '||rx.foreign_table_schema||'."'||rx.foreign_table_name||'"
             where "'||rx.foreign_column_name||'"in('||coalesce(p_keys, p_subquery)||') for update';
        if p_foreign_keys @> ARRAY[v_foreign_key] then
            --raise notice 'circular recursion detected';
        else
            p_foreign_keys := array_append(p_foreign_keys, v_foreign_key);
            recnum:= recnum + delete_cascade(rx.foreign_table_schema::varchar, rx.foreign_table_name::varchar, null, v_subquery::varchar, p_foreign_keys);
            p_foreign_keys := array_remove(p_foreign_keys, v_foreign_key);
        end if;
    end loop;

    begin
        if (coalesce(p_keys, p_subquery) <> '') then
            v_sql := 'delete from '||p_schema||'."'||p_table||'" where "'||v_primary_key||'"in('||coalesce(p_keys, p_subquery)||')';
            --raise notice '%',v_sql;
            execute v_sql;
            get diagnostics v_rows = row_count;
            recnum := recnum + v_rows;
        end if;
        exception when others then recnum=0;
    end;

    return recnum;

end;
$$
language PLPGSQL;
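
A hedged usage sketch, inferred from the signature above: p_keys is interpolated directly into an IN (...) list, so it should already be a valid comma-separated list of SQL literals (the table name here is hypothetical):

select delete_cascade('public', 'my_table', '1,2,3');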
Thomas C. G. de Vilhena
    I'm going to have to look at this and see how well it works with self referencing constraints and the like. I attempted to do something similar but stopped short of getting it fully working. If your solution works for me, I'm going to implement it. This is one of many dba tools that should be packaged and put up on github or something. – Joe Love Jul 26 '18 at 18:54
  • I have medium size databases for a multi-tenant CMS (clients all share the same tables). My version (without the "in") seems to run fairly slowly when deleting all traces of an old client... I'm interested in trying this out with some mock-up data to compare speeds. Did you have anything you could say about the speed difference you noticed in your use case(s)? – Joe Love Aug 13 '19 at 17:26
  • For my use case I noticed a speed up in the order of 10x when using the `in` operator and sub-queries. – Thomas C. G. de Vilhena Aug 14 '19 at 18:41
  • For people coming here a couple of years after like me. This function works great but make sure to explicitly cast the schema, table name and subquery to varchar when executing the delete_cascade recursively ! `delete_cascade(rx.foreign_table_schema::varchar, rx.foreign_table_name::varchar, null, v_subquery::varchar, p_foreign_keys);` My server was spewing a `No function matches` error. – BinarSkugga Mar 28 '22 at 09:20
4

To automate this, you could define the foreign key constraint with ON DELETE CASCADE.
I quote the manual on foreign key constraints:

CASCADE specifies that when a referenced row is deleted, row(s) referencing it should be automatically deleted as well.
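
If the table already exists, the constraint can be redefined to cascade. A minimal sketch with hypothetical names (the constraint name must match the existing one, which \d shows):

ALTER TABLE some_child_table
    DROP CONSTRAINT some_child_table_some_fk_field_fkey,
    ADD CONSTRAINT some_child_table_some_fk_field_fkey
        FOREIGN KEY (some_fk_field) REFERENCES some_table (some_id) ON DELETE CASCADE;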

atiruz
    Although this doesn't address the OP, it's good planning for when rows with foreign keys need to be deleted. As Ben Franklin said, "an ounce of prevention is worth a pound of cure." – Jesuisme Mar 12 '17 at 04:18
  • I've found that this solution can be quite dangerous if your app deletes a record with lots of dependents: instead of a minor error, you have permanently deleted a huge dataset. – Joe Love Mar 13 '18 at 21:34
1

A delete with the cascade option only applies to tables with foreign keys defined. If you do a delete and it says you cannot because it would violate the foreign key constraint, the cascade will cause it to delete the offending rows.

If you want to delete associated rows in this way, you will need to define the foreign keys first. Also, remember that unless you explicitly instruct it to begin a transaction, or you change the defaults, it will do an auto-commit, which could be very time consuming to clean up.

Grant Johnson
    there is no way to "delete with cascade" on a table which hasn't been set up accordingly, i.e. for which the foreign key constraint has not been defined as ON DELETE CASCADE, which is what the question was originally all about. – lensovet Feb 14 '12 at 05:56
    Grant's answer is partly wrong - Postgresql doesn't support CASCADE on DELETE queries. http://www.postgresql.org/docs/8.4/static/dml-delete.html – Fredrik Wendt Aug 08 '09 at 23:27
    Any idea why it's not supported on the delete query? – Teifion Jul 29 '10 at 21:59
    As an answer to *this* question, this is completely wrong. There is no way to CASCADE once. – Jeremy Oct 26 '18 at 12:43
-1

When you create a new table, you can add constraints such as UNIQUE or NOT NULL, and you can also tell SQL what action to take when you try to DELETE rows that are referenced from other tables:

CREATE TABLE company (
                id SERIAL PRIMARY KEY,
                name VARCHAR(128),
                year DATE);
CREATE TABLE employee (
                id SERIAL PRIMARY KEY,
                first_name VARCHAR(128) NOT NULL,
                last_name VARCHAR(128) NOT NULL,
                company_id INT REFERENCES company(id) ON DELETE CASCADE,
                salary INT,
                UNIQUE (first_name, last_name));

So after that you can just DELETE any rows you need, for example:

DELETE
FROM company
WHERE id = 2;
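
A quick way to confirm the cascade, using the tables defined above:

SELECT count(*) FROM employee WHERE company_id = 2;   -- returns 0 after the delete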
helvete