
I'm working with a C# .NET 4.0 application that uses ODP.NET 11.2.0.2.0 with an Oracle 11g database. The application pre-loads a few look-up tables with data, and since most have fewer than 20 records, the scripts run pretty quickly. However, one of the scripts has 802 records and takes 248.671 seconds to insert them, which seems excessive for such a small amount of data with a database that advertises fast operations on large volumes of data.

So I'm wondering, is there a faster way to insert data, via script, than the current way the script is written?

The table being inserted into is defined like so:

CREATE TABLE FileIds
(
     Id                 NUMERIC(38)                         NOT NULL
    ,Name               NVARCHAR2(50)   DEFAULT 'Unknown'   NOT NULL 
    ,FileTypeGroupId    NUMERIC(38)                         NOT NULL
    ,CONSTRAINT FK_FileIds_FileTypeGroups FOREIGN KEY ( FileTypeGroupId ) REFERENCES FileTypeGroups ( Id )
)

And the script to insert looks like the following:

BEGIN
    INSERT ALL
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1152,5)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1197,10)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1200,6)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1143,3)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1189,9)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1109,7)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1166,4)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (0,8)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1149,2)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1400,1)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1330,11)
        INTO FileIds ( Id, FileTypeGroupId ) VALUES (1000,0)
        -- 790 Records removed for example purposes.
        SELECT * FROM DUAL;
    COMMIT;
END;

The FileTypeGroups table, referenced in the foreign key, is pre-loaded prior to the loading of the FileIds table. There are no sequences or triggers associated with the FileIds table, and as of yet no indexes have been created for it.

Paul
  • Looks fine to me. Have you checked what the problem is with the Database Tuning Advisor? – danish Jul 25 '12 at 18:19
  • No. I'm still pretty new to Oracle, and don't know anything about Database Tuning Advisor. Is it a separate app like SQL Developer, or a package that needs to be loaded into SQL Developer? – Paul Jul 25 '12 at 18:29
  • What version of Oracle? Some of the stripped down versions don't have all the tools. There has to be something else going on as 802 records even on the slowest hardware should take a fraction of a second. Make sure there's no other transactions that could be holding a lock. – JOTN Jul 25 '12 at 23:57
  • 1
    Can C# do "batched statements"? In Java they incredibly fast with the Oracle JDBC driver. –  Jul 26 '12 at 07:23
  • @JOTN According to sqlplus, the version is: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production, PL/SQL Release 11.2.0.3.0 - Production, CORE 11.2.0.3.0 Production, TNS for 64-bit Windows: Version 11.2.0.3.0 - Production, NLSRTL Version 11.2.0.3.0 - Production – Paul Jul 26 '12 at 17:16
  • @a_horse_with_no_name Yes, I believe batched statements are possible, however from what I've read so far, the performance is flaky and likely plagued with memory management problems. http://stackoverflow.com/questions/343299/bulk-insert-to-oracle-using-net/10100908#10100908 – Paul Jul 26 '12 at 17:27

1 Answer


Problem

Parsing time may increase exponentially with certain types of statements, especially INSERT ALL. For example:

--Clear any cached statements, so we can consistently reproduce the problem.
alter system flush shared_pool;
alter session set sql_trace = true;

--100 rows
INSERT ALL
    INTO FileIds(Id,FileTypeGroupId) VALUES(1, 1)
    ...
    repeat 100 times
    ...
select * from dual;

--500 rows
INSERT ALL
    INTO FileIds(Id,FileTypeGroupId) VALUES(1, 1)
    ...
    repeat 500 times
    ...
select * from dual;

alter session set sql_trace = false;

Run the trace file through tkprof, and you can see the Parse time increases dramatically for a large number of rows. For example:

100 rows:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.06       0.05          0          1          0           0
Execute      1      0.00       0.00          0        100        303         100
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.06       0.05          0        101        303         100

500 rows:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1     14.72      14.55          0          0          0           0
Execute      1      0.01       0.02          0        502       1518         500
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2     14.74      14.58          0        502       1518         500

Solutions

  1. Break your large statement into several smaller statements. It's difficult to find the optimal size; on some versions of Oracle there's a magic number of rows that triggers the problem. I usually go for about 100 rows - enough to get most of the benefits of grouping statements, but low enough to avoid the parsing bug. OR...
  2. Try the insert into ... select ... from dual union all ... method instead. It usually runs much faster, although its parsing performance may also degrade significantly with size.
  3. Upgrade Oracle. Parsing performance has improved in newer versions. I can no longer reproduce this issue in version 12.2.
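
For the second option, a minimal sketch using the table and rows from the question (the row list is truncated the same way as the original script):

```sql
-- One INSERT ... SELECT built from UNION ALL branches, instead of INSERT ALL.
-- Each row becomes a single-row SELECT against DUAL; the whole thing is one
-- statement, but it parses much faster than a large INSERT ALL.
INSERT INTO FileIds ( Id, FileTypeGroupId )
SELECT 1152, 5  FROM DUAL UNION ALL
SELECT 1197, 10 FROM DUAL UNION ALL
SELECT 1200, 6  FROM DUAL UNION ALL
-- ...one SELECT ... FROM DUAL per remaining row...
SELECT 1000, 0  FROM DUAL;

COMMIT;
```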

Warning

Don't learn the wrong lesson from this. If you're worried about SQL performance, 99% of the time you're better off grouping similar things together instead of splitting them apart. You're doing things the right way; you just ran into a weird bug. (I searched My Oracle Support but couldn't find an official bug for this.)

Jon Heller
  • Wow, switching to the UNION ALL method made a huge difference (0.497 seconds). Thank you! :) Also, thank you for the warning, it's a lesson well-taken. – Paul Jul 26 '12 at 17:51