
I tried searching here for a similar solution but didn't see one, so I was wondering: what is the best way to accomplish the following?

I have a table with 17 million+ rows, each with a unique ID. We have recently created a new table that will be used in conjunction with the old one, where the foreign key of the new table is the unique ID of the old table.

For example:

Table 1 - id, field1, field2, field3, ...
Table 2 - table1.id, field1, ...

The problem is that since we are migrating this into a live environment, we need to backfill table 2 with a row containing the ID from table 1 for each row in table 1. For example, if table 1 has a row (1, test, null), table 2 now needs to have (1, null, ...), and so on for each row in table 1. The main issue is that the IDs in table 1 are not all sequential, so we will have to read from table 1 and then insert into table 2 based on the IDs found.

Is there any easier way to go about this?

Also, to clarify: table 2 will hold new data, and the only thing it will contain from table 1 is the ID, to keep the foreign key relationship.

Also, this is SQL Server 2000.

obj63

4 Answers


If I understand correctly, you want one record in table2 for each record in table1. Also I believe that apart from the reference to table1, table2 should initially contain blank rows.

So assuming

table1 (ID, field1, field2, ...)
table2 (ID, table1_ID, fieldA, fieldB,...)
-- where table1_ID is a reference to ID of table1

After creating table2 you can simply run this insert statement

insert into table2(table1_ID) 
select ID from table1
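As a sanity check, the same set-based pattern can be demonstrated end-to-end (a minimal sketch using Python's sqlite3 standing in for SQL Server 2000; the table and column names follow the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# table1 is the existing table (3 rows here for illustration);
# the IDs are deliberately non-sequential, as in the question.
cur.execute("CREATE TABLE table1 (ID INTEGER PRIMARY KEY, field1 TEXT)")
cur.executemany("INSERT INTO table1 VALUES (?, ?)",
                [(1, "a"), (7, "b"), (42, "c")])

# table2 gets only the foreign key for now; its own fields stay NULL.
cur.execute("""CREATE TABLE table2 (
    table1_ID INTEGER REFERENCES table1(ID),
    fieldA TEXT)""")

# The single set-based statement from the answer:
cur.execute("INSERT INTO table2 (table1_ID) SELECT ID FROM table1")

print(cur.execute(
    "SELECT table1_ID, fieldA FROM table2 ORDER BY table1_ID").fetchall())
# [(1, None), (7, None), (42, None)]
```

One INSERT ... SELECT statement backfills every row; no row-by-row reading is needed even though the IDs are non-sequential.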
kristof

I am not sure I am exactly following you, but would something like this work for you?

INSERT INTO table2 ( SELECT field1, field2, field3... FROM table1 )

If I am understanding correctly, you want a record in table2 for every record in table1. This will do just that. Just match up your fields in the SELECT in the right order, and specify constants for any fields in table2 that you don't have in table1.

HTH. Let me know if I am not understanding and I'll try to help again.

Ryan Guill
  • Does this syntax work in SQL Server? When trying it, I'm getting an error that says "Error near SELECT" – Trevor Jul 29 '13 at 14:19
  • Yes, this should work for SQL Server too. Create a question with the syntax that's throwing an error and send a link if you want. – Ryan Guill Aug 02 '13 at 12:33
  • In order to make it work in SQL Server, I had to write: INSERT INTO table2 (field1, field2, field3) SELECT field1, field2, field3 FROM table1 – ConnorsFan Jun 10 '14 at 18:45

You need to read this article.

What are the most common SQL anti-patterns?

"The main issue is that the IDs are not all sequential in table 1, so we will have to read from table 1 and then insert into table 2 based on the IDs found."

Yes, look at my answer in the above article and write a key-walking loop using Item #2.

Make sure when you write the insert statement, you provide a field list, as I say in Item #1.
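The key-walking loop can be sketched as follows (a hypothetical illustration in Python with sqlite3 rather than SQL Server 2000 T-SQL; the batch size and schema are assumptions). Each pass inserts one batch of keys above the last key already processed, keeping every transaction small:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE table1 (ID INTEGER PRIMARY KEY)")
# Non-sequential IDs, as in the question: 1, 4, 7, ..., 73.
cur.executemany("INSERT INTO table1 VALUES (?)",
                [(i * 3 + 1,) for i in range(25)])
cur.execute("CREATE TABLE table2 (table1_ID INTEGER REFERENCES table1(ID))")

BATCH = 10     # in the real 17M-row migration this would be much larger
last_id = 0    # walk keys upward, starting below the smallest ID
while True:
    # Insert the next batch of keys strictly above the last one handled.
    cur.execute("""
        INSERT INTO table2 (table1_ID)
        SELECT ID FROM table1
        WHERE ID > ? ORDER BY ID LIMIT ?""", (last_id, BATCH))
    if cur.rowcount == 0:
        break                  # no keys left: done
    last_id = cur.execute("SELECT MAX(table1_ID) FROM table2").fetchone()[0]
    conn.commit()              # commit per batch to keep transactions short

print(cur.execute("SELECT COUNT(*) FROM table2").fetchone()[0])  # 25
```

Because gaps in the ID sequence don't matter to the `WHERE ID > ?` predicate, the loop handles non-sequential keys without ever reading rows one at a time.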

Amy B

With that many rows you may run into issues with transaction log space, and with the length of time large insert transactions take to run.

If run time is a constraint, I'd seriously recommend using bcp (or whatever tool is applicable, depending on the platform).

Select out the IDs from the original table, use them to build a bcp file for the extension table, then bcp it in.

You may find it more performant to bcp in files of 10,000 records instead of one humongous file with 17,000,000 rows.

Also, you can do this in the background before go-live, and write a T-SQL job to pick up any rows that may have been inserted after you took the snapshot of IDs.
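The file-splitting step can be sketched like this (a hypothetical Python helper; the file naming, batch size, and single-column line format are assumptions — a real bcp run against table2 would also need a format file or defaults for the other columns):

```python
import itertools

def write_bcp_batches(ids, batch_size=10_000, prefix="table2_batch"):
    """Split exported IDs into fixed-size data files for `bcp ... in`.

    Each line becomes one value for table2's foreign-key column.
    Returns the list of file names written.
    """
    names = []
    it = iter(ids)
    for n in itertools.count():
        batch = list(itertools.islice(it, batch_size))
        if not batch:
            break
        name = f"{prefix}_{n:05d}.dat"
        with open(name, "w") as f:
            f.writelines(f"{i}\n" for i in batch)
        names.append(name)
    return names
```

Each file could then be loaded with something like `bcp db.dbo.table2 in table2_batch_00000.dat -c`, repeated per file (the exact bcp flags depend on the environment).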

Binary Worrier