
Using Oracle, is there a way to insert values from one table into another table, then take an identity value from the inserted table and update a column in the original?

TABLE_1 is empty

ID    VALUE
-----------

Values from TABLE_2 ...

ID    VALUE
-----------
0     Val 1
0     Val 2
0     Val 3

...get inserted into TABLE_1 (with an Identity column)

ID    VALUE
-----------
1     Val 1
2     Val 2
3     Val 3

And then TABLE_2 is updated with the new IDs

ID    VALUE
-----------
1     Val 1
2     Val 2
3     Val 3
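
For reference, a minimal setup matching this description might be the following (the answers below refer to the tables as table1 and table2; the exact data types here are assumptions):

create table table1 (id number, value varchar2(20));
create table table2 (id number, value varchar2(20));

insert into table2 (id, value) values (0, 'Val 1');
insert into table2 (id, value) values (0, 'Val 2');
insert into table2 (id, value) values (0, 'Val 3');
commit;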

2 Answers


You need to get procedural for such a requirement. This solution uses SELECT ... FOR UPDATE, which locks the selected rows in the source table to prevent another session from nabbing a record we are about to give a new ID. It also gives us the WHERE CURRENT OF syntax, which makes it easy to identify the record we need to update.

This solution supposes the existence of a sequence for populating the identity column. There are other options available to us (including identity columns in 12c), but the RETURNING clause is the key to snagging the new value.
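
For instance, a one-line sketch of the sequence the block below relies on (t1_seq, starting at 1, is an assumed name):

create sequence t1_seq start with 1 increment by 1;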

declare
    cursor c2 is
        select * from table2
        for update of id;
    r2 c2%rowtype;
    new_id table1.id%type;
begin
    open c2;
    loop
        fetch c2 into r2;
        exit when c2%notfound;
        /* insert the new record and grab its generated ID */
        insert into table1
        values (t1_seq.nextval, r2.value)
        returning id into new_id;
        /* update the existing record with the new ID */
        update table2
        set id = new_id
        where current of c2;
    end loop;
    close c2;
    commit;
end;
/

This solution is row-by-row: it is the easiest way to make sure that the new T1.ID gets applied to the correct row in T2. If T1 is small and/or this is a one-off exercise, that's probably fine. But if performance is a concern there are tunings available.
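
One such tuning, hinted at in the comments below, is a set-based approach. This is only a sketch, not part of the answer above, and it is only safe if VALUE uniquely identifies a row in TABLE_2, which the question does not guarantee:

/* sketch only: relies on VALUE being unique in table2 */
insert into table1 (id, value)
select t1_seq.nextval, value
from table2;

merge into table2 t2
using table1 t1
on (t1.value = t2.value)
when matched then update set t2.id = t1.id;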

  • Thank you, Cursor was the way I was going to go, I wanted to know if I could avoid it, or if there is a fancy MERGE I could do. – JonathanPeel May 03 '17 at 11:10
  • Also, thank you very much for the `for update`. I did not know about that in a cursor, and I am guessing it would have had me stuck for a while without it. – JonathanPeel May 03 '17 at 11:10
  • I do have a doubt: would a trigger be a better approach? i.e. on the main table a trigger to insert values into the second table, and then another trigger on the second table to update the value in the main table; correct me if I am wrong. – Jacob May 03 '17 at 11:14
  • @user75ponic - A trigger could work for inserting new records in `table_1` but doesn't help for existing records. So it seems like a new question. – APC May 03 '17 at 11:16

If table 2 has a lot of rows, I recommend you use BULK COLLECT. It will help improve performance on the database. Like this:

declare
    /* fetch the rowid as well, so each source row can be updated individually:
       WHERE CURRENT OF cannot be used with a bulk fetch */
    cursor cur_table2 is
        select t2.value, t2.rowid row_id
        from table2 t2;
    type type_table2 is table of cur_table2%rowtype index by binary_integer;
    vt_table2 type_table2;
    new_id table1.id%type;
begin
    open cur_table2;
    loop
        fetch cur_table2 bulk collect into vt_table2 limit 500;
        for i in 1..vt_table2.count loop
            begin
                insert into table1
                values (t1_seq.nextval, vt_table2(i).value)
                returning id into new_id;
                update table2
                set id = new_id
                where rowid = vt_table2(i).row_id;
            exception when others then
                dbms_output.put_line('Error: ' || sqlerrm);
            end;
        end loop;
        exit when cur_table2%notfound;
    end loop;
    close cur_table2;
    commit;
exception when others then
    dbms_output.put_line('Error: ' || sqlerrm);
end;
/
  • Thank you. I will run both types and see if there is any performance improvement. If you don't mind me asking, what is it the bulk collect does here? – JonathanPeel May 04 '17 at 19:31
  • > BULK COLLECT: SELECT statements that retrieve multiple rows with a single fetch, improving the speed of data retrieval. You need to have an array to save the content and search locally: `fetch cur_table2 bulk collect into vt_table2 limit 500;` – cport93 May 04 '17 at 19:41
  • So basically, get all records for the cursor at once, and then loop through? – JonathanPeel May 04 '17 at 19:42
  • > Specifically, memory for collections is stored in the program global area (PGA), not the system global area (SGA). SGA memory is shared by all sessions connected to Oracle Database, but PGA memory is allocated for each session. Thus, if a program requires 5MB of memory to populate a collection and there are 100 simultaneous connections, that program causes the consumption of 500MB of PGA memory, in addition to the memory allocated to the SGA. [link](http://www.oracle.com/technetwork/issue-archive/2008/08-mar/o28plsql-095155.html) – cport93 May 04 '17 at 19:49
  • @JonathanPeel Yes, it takes all records in blocks of 500, then analyzes the data according to your procedures. – cport93 May 04 '17 at 21:04