I have more than 1 million insert queries to be executed in Oracle SQL Developer, which takes a lot of time. Is there any way to optimize this?
-
Do you have the queries or do you have the data for the 10 lakh rows? Is your database Oracle SQL Developer or MySQL? Where is the current data? – Chetan Oct 16 '18 at 11:09
-
Put the data in a file and load the data from a file. – Gordon Linoff Oct 16 '18 at 11:09
-
@ChetanRanpariya I have 10 lakh (1 million) queries and the database is Oracle SQL Developer – pheww Oct 16 '18 at 11:16
-
Possible duplicate of [Execute scripts by relative path in Oracle SQL Developer](https://stackoverflow.com/questions/24002623/execute-scripts-by-relative-path-in-oracle-sql-developer) – Mick Mnemonic Oct 16 '18 at 11:46
-
I would suggest you put all the inserts in a .sql file, copy it over to the database server, and run it from the SQL*Plus command prompt. – Fact Oct 16 '18 at 12:15
5 Answers
SQL Developer is the wrong tool for running 1,000,000 row-by-row inserts.
So is SQL*Plus, for that matter.
If you can't write a program to do the inserts using a loop, a cursor, or some PL/SQL bulk collects, then do what @marmite-bomber suggests: write your data out to a flat delimited text file and set up a SQL*Loader scenario.
Now, you CAN use SQL Developer to do this.
Point it to your delimited text file, map everything up, and then use the import method that generates the SQL*Loader scripts.
Once it's done, you'll have a set of bash or cmd scripts that you will need to execute. But first, you'll need an Oracle Client on your machine - that's where the sqlldr program lives.
It's orders of magnitude faster than running a 1,000,000 individual inserts. And your DBA won't hate you.
Using the SQL Developer wizard, you can have it up and running in just a few minutes.
The PROOF: I tested a simple scenario - my findings were (details here):
- SQL*Loader: 10,753 records inserted per second
- SQL Developer: 342 records inserted per second
SQL*Loader was 3,144% faster.
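A rough sketch of what the generated control file can look like (the table, column, and file names below are placeholders for illustration):
-- load_tab.ctl (hypothetical names throughout)
LOAD DATA
INFILE 'tab_data.csv'
APPEND
INTO TABLE tab
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(col1, col2, col3 DATE "DD-MM-YYYY HH24:MI:SS")
which you would then run with something like
sqlldr userid=your_user@your_db control=load_tab.ctl log=load_tab.log direct=true
where direct=true asks for a direct path load, which is even faster than a conventional one.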

-
Have you actually _measured_ that running 1M inserts as a one-time operation through SQL Developer's script runner utility is _that much_ slower compared to this approach, which also requires an Oracle Client install and probably setting up `tnsnames.ora` etc.? I mean, OP _can_ do the job with the current tools they have. – Mick Mnemonic Oct 16 '18 at 15:55
-
@MickMnemonic yes, I have. sqlldr could load 1M records in a few seconds, especially if you look at direct path, parallel, and other performance-minded features sqlldr brings to the table. – thatjeffsmith Oct 16 '18 at 15:58
-
@MickMnemonic if you're going to be routinely doing data loads of any size, taking the time to set up the proper tools will be worth it. If this is a one-and-done, then sure, go ahead and run your script with SQLDev. – thatjeffsmith Oct 16 '18 at 15:59
-
So how long did it take with SQLDeveloper? And I understand that SQLLoader is really useful for "real" data migrations/conversions. – Mick Mnemonic Oct 16 '18 at 16:01
-
@MickMnemonic in a scenario i just ran today, SQL*loader was 3,144% faster than SQL Developer. – thatjeffsmith Oct 16 '18 at 17:54
Running single-row INSERTs is not feasible for such a number of rows, so one possible way is to preprocess the script, extracting only the CSV data.
Here is a simple example:
Insert into TAB(COL1,COL2,COL3) values ('1','xxx',to_date('16-10-2018 15:13:49','DD-MM-YYYY HH24:MI:SS'));
Insert into TAB(COL1,COL2,COL3) values ('2','zzzzz',to_date('06-10-2018 15:13:49','DD-MM-YYYY HH24:MI:SS'));
Remove all the irrelevant parts to get
'1','xxx','16-10-2018 15:13:49'
'2','zzzzz','06-10-2018 15:13:49'
and load this file using SQL*Loader or an external table. The performance will be fine.
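If you prefer the external table route, a minimal sketch could look like this (TAB and its columns come from the example above; the directory object, datatypes, and file name are assumptions):
-- external table reading the extracted CSV; data_dir is a hypothetical directory object
CREATE TABLE tab_ext (
  col1 VARCHAR2(10),
  col2 VARCHAR2(100),
  col3 VARCHAR2(30)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY "'"
  )
  LOCATION ('tab_data.csv')
);
-- one set-based insert instead of a million single-row statements
INSERT INTO tab (col1, col2, col3)
SELECT col1, col2, TO_DATE(col3, 'DD-MM-YYYY HH24:MI:SS') FROM tab_ext;
COMMIT;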

I have more than 1 million insert queries to be executed in Oracle SQL Developer
There's an overhead to executing any SQL statement, and you are paying that tax 1,000,000 times. In addition, some IDEs instantiate a separate tab for the outcome of each SQL statement executed in a worksheet (when run in one click). I can't remember whether SQL Developer does that, but if it does, that's another tax you're paying 1,000,000 times.
Two possible optimizations.
Squidge the single-row inserts into set operations by selecting values from dual:
insert into your_table
select blah_seq.nextval, q.txt, q.val from (
    select 'WHATEVER' as txt, 42 as val from dual union all
    select 'AND SO ON' as txt, 23 as val from dual union all
    ...
) q;
This will reduce the overhead, but it is a lot of editing work.
Alternatively, use a bulk loading option such as SQL*Loader or external tables to load the data. If you have a programmer's editor with regex support (like Notepad++), it's relatively simple to strip out the insert syntax and convert the values clause into CSV rows.
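For instance, against inserts shaped like the TAB example in the other answer, a Notepad++ find/replace in regular expression mode along these lines would do most of the work (the exact patterns depend on how your script was generated):
Find:    ^Insert into TAB\(COL1,COL2,COL3\) values \((.*)\);$
Replace: $1
followed by a second pass to unwrap the to_date() calls:
Find:    to_date\(('[^']*'),'[^']*'\)
Replace: $1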
A third approach would be to go back to the source which produced the million insert statements and ask them to provide the data in a more manageable format, such as a Data Pump export.

-
Some care should be taken to avoid [ORA-02287: sequence number not allowed here](https://stackoverflow.com/questions/228221/how-can-i-insert-multiple-rows-into-oracle-with-a-sequence-value) in `UNION ALL` query. – Marmite Bomber Oct 16 '18 at 17:09
-
@MarmiteBomber - thanks for spotting that hasty pseudo-code before lunch bloomer ;) – APC Oct 16 '18 at 17:15
Another suggestion is to validate that you do not have any duplicates on possible PK columns. Drop the constraints before the insert and put them back once done, and keep that alter script handy. You can do this as part of the same script or a different one. If you don't check for duplicates before putting the PKs back on, expect an error; then you'll be chasing down the duplicate rows and deleting that data.
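A minimal sketch of that flow, assuming a table TAB with primary key column COL1 and a constraint called TAB_PK (all names are placeholders):
-- disable the PK (and its index) before the load
ALTER TABLE tab DISABLE CONSTRAINT tab_pk;
-- ... run the bulk load here ...
-- check for duplicates on the would-be PK column before re-enabling
SELECT col1, COUNT(*) FROM tab GROUP BY col1 HAVING COUNT(*) > 1;
-- re-enable; this will fail if duplicate values are still present
ALTER TABLE tab ENABLE CONSTRAINT tab_pk;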

You could put chunks of your data into a collection and bulk insert them.
Also, drop all indexes other than the primary key and recreate them after the data load.
It could be a little dangerous, but depending on the quality of the data you could also remove the foreign keys and recreate them once the data is imported. You would have to be sure your import didn't break the FK relationships.
declare
  type my_tab is table of mytable%rowtype index by binary_integer;
  a_imp my_tab;
begin
  -- do stuff to populate a_imp, presumably from a text file or a select statement
  -- you could chunk the population of the array and bulk insert, say, 10,000 rows at a time,
  -- looping through the holding array and importing any remaining data
  forall ix in 1 .. a_imp.count
    insert into mytable values a_imp(ix);
  commit;
end;
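A sketch of the chunked variant, assuming the rows are staged somewhere selectable first (mytable_stage is a hypothetical staging or external table with the same columns as mytable):
declare
  cursor c_src is select * from mytable_stage;  -- hypothetical staging table, same columns as mytable
  type my_tab is table of mytable%rowtype index by binary_integer;
  a_imp my_tab;
begin
  open c_src;
  loop
    -- pull the next 10,000 rows into the collection
    fetch c_src bulk collect into a_imp limit 10000;
    exit when a_imp.count = 0;
    -- bulk insert the current chunk
    forall ix in 1 .. a_imp.count
      insert into mytable values a_imp(ix);
    commit;
  end loop;
  close c_src;
end;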
