I am new to Oracle and I am looking for the best way to insert thousands (maybe millions) of records into a table.
I have seen other questions and answers about this situation, but in the answer I found, the PL/SQL code uses TWO associative arrays of scalar types (indexed by PLS_INTEGER) that act as table columns. I need the same thing, but with ONE nested table of a record/complex type, so that each element is inserted into the table as a row (the column-array style I am contrasting against is sketched just below).
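For context, that column-array style looks roughly like this on the node-oracledb side. This is only a sketch, not my real code: the procedure SP_LOAD_COLUMNS and its two PLS_INTEGER-indexed associative array parameters are made up for illustration, and it reuses the same connection (context) and data array that appear in my code further down.

// Hypothetical procedure that takes one associative array (INDEX BY PLS_INTEGER) per column.
const cOneValues = data.map(row => row.C_ONE);
const cTwoValues = data.map(row => row.C_TWO);
await context.execute(
  'BEGIN PACKAGE_TEST.SP_LOAD_COLUMNS(:p_c_one, :p_c_two); END;',
  {
    p_c_one: { type: oracledb.STRING, dir: oracledb.BIND_IN, val: cOneValues },
    p_c_two: { type: oracledb.STRING, dir: oracledb.BIND_IN, val: cTwoValues }
  }
);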
First of all, I have this code in Node.js (TypeScript) using the oracledb package (v 5.1.0):
import oracledb from 'oracledb';

let data: Array<DataModel>;
// The 'data' variable is populated elsewhere; 'DataModel' is an interface.
// 'data' is an array that matches the table's structure exactly:
// [
//   { C_ONE: 'mike', C_TWO: 'hugman', C_THREE: '34', ... with 12 other columns },
//   { C_ONE: 'robert', C_TWO: 'zuck', C_THREE: '34', ... with 12 other columns },
//   { C_ONE: 'john', C_TWO: 'gates', C_THREE: '34', ... with 12 other columns }
// ]
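
// For completeness, a sketch of the 'DataModel' interface assumed above; the
// field names mirror the 16 VARCHAR columns of the TEST_STRESS table shown below.
interface DataModel {
  C_ONE: string;      C_TWO: string;       C_THREE: string;    C_FOUR: string;
  C_FIVE: string;     C_SIX: string;       C_SEVEN: string;    C_EIGHT: string;
  C_NINE: string;     C_TEN: string;       C_ELEVEN: string;   C_TWELVE: string;
  C_THIRTEEN: string; C_FOURTEEN: string;  C_FIVETEEN: string; C_SIXTEEN: string;
}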
let context;
try {
  context = await oracledb.getConnection({
    user: 'admin',
    password: 'admin',
    connectString: 'blabla'
  });

  const result = await context.execute(
    // My stored procedure
    'BEGIN PACKAGE_TEST.SP_TEST_STRESS(:p_data, :p_status); END;',
    {
      // My JSON array
      p_data: {
        type: 'PACKAGE_TEST.T_STRESS',
        val: data
      },
      // Out variable to check whether everything succeeded or failed... this doesn't matter :)
      p_status: {
        type: oracledb.NUMBER,
        val: 1,
        dir: oracledb.BIND_OUT
      }
    },
    { autoCommit: true }
  );

  console.log(result);

  if ((result.outBinds as { p_status: number }).p_status === 0) {
    // Correct
  } else {
    // Failed
  }
} catch (error) {
  // bla bla for errors
} finally {
  if (context) {
    try {
      await context.close();
    } catch (error) {
      // bla bla for errors
    }
  }
}
And here is the PL/SQL code for my stored procedure:
CREATE OR REPLACE PACKAGE PACKAGE_TEST
IS
  TYPE R_STRESS IS RECORD
  (
    C_ONE      VARCHAR(50),
    C_TWO      VARCHAR(500),
    C_THREE    VARCHAR(10),
    C_FOUR     VARCHAR(100),
    C_FIVE     VARCHAR(10),
    C_SIX      VARCHAR(100),
    C_SEVEN    VARCHAR(50),
    C_EIGHT    VARCHAR(50),
    C_NINE     VARCHAR(50),
    C_TEN      VARCHAR(50),
    C_ELEVEN   VARCHAR(50),
    C_TWELVE   VARCHAR(50),
    C_THIRTEEN VARCHAR(300),
    C_FOURTEEN VARCHAR(100),
    C_FIVETEEN VARCHAR(300),
    C_SIXTEEN  VARCHAR(50)
  );

  TYPE T_STRESS IS VARRAY(213627) OF R_STRESS;

  PROCEDURE SP_TEST_STRESS
  (
    P_DATA_FOR_PROCESS T_STRESS,
    P_STATUS           OUT NUMBER
  );
END;
/
CREATE OR REPLACE PACKAGE BODY PACKAGE_TEST
IS
  PROCEDURE SP_TEST_STRESS
  (
    P_DATA_FOR_PROCESS T_STRESS,
    P_STATUS           OUT NUMBER
  )
  IS
  BEGIN
    DBMS_OUTPUT.put_line('started');
    BEGIN
      FORALL i IN 1 .. P_DATA_FOR_PROCESS.COUNT
        INSERT INTO TEST_STRESS
        (
          C_ONE,
          C_TWO,
          C_THREE,
          C_FOUR,
          C_FIVE,
          C_SIX,
          C_SEVEN,
          C_EIGHT,
          C_NINE,
          C_TEN,
          C_ELEVEN,
          C_TWELVE,
          C_THIRTEEN,
          C_FOURTEEN,
          C_FIVETEEN,
          C_SIXTEEN
        )
        VALUES
        (
          P_DATA_FOR_PROCESS(i).C_ONE,
          P_DATA_FOR_PROCESS(i).C_TWO,
          P_DATA_FOR_PROCESS(i).C_THREE,
          P_DATA_FOR_PROCESS(i).C_FOUR,
          P_DATA_FOR_PROCESS(i).C_FIVE,
          P_DATA_FOR_PROCESS(i).C_SIX,
          P_DATA_FOR_PROCESS(i).C_SEVEN,
          P_DATA_FOR_PROCESS(i).C_EIGHT,
          P_DATA_FOR_PROCESS(i).C_NINE,
          P_DATA_FOR_PROCESS(i).C_TEN,
          P_DATA_FOR_PROCESS(i).C_ELEVEN,
          P_DATA_FOR_PROCESS(i).C_TWELVE,
          P_DATA_FOR_PROCESS(i).C_THIRTEEN,
          P_DATA_FOR_PROCESS(i).C_FOURTEEN,
          P_DATA_FOR_PROCESS(i).C_FIVETEEN,
          P_DATA_FOR_PROCESS(i).C_SIXTEEN
        );

      -- Set the success flag inside the inner block so a failure is not overwritten.
      P_STATUS := 0;
    EXCEPTION
      WHEN OTHERS THEN
        P_STATUS := 1;
    END;
  END;
END;
/
And my target table:
CREATE TABLE TEST_STRESS
(
  C_ONE      VARCHAR(50),
  C_TWO      VARCHAR(500),
  C_THREE    VARCHAR(10),
  C_FOUR     VARCHAR(100),
  C_FIVE     VARCHAR(10),
  C_SIX      VARCHAR(100),
  C_SEVEN    VARCHAR(50),
  C_EIGHT    VARCHAR(50),
  C_NINE     VARCHAR(50),
  C_TEN      VARCHAR(50),
  C_ELEVEN   VARCHAR(50),
  C_TWELVE   VARCHAR(50),
  C_THIRTEEN VARCHAR(300),
  C_FOURTEEN VARCHAR(100),
  C_FIVETEEN VARCHAR(300),
  C_SIXTEEN  VARCHAR(50)
);
An interesting behavior shows up in this scenario:
If I send my JSON array with 200 rows, it works perfectly. I don't know the exact time it takes to complete, but it is a matter of milliseconds.
If I send my JSON array with 200,000 rows, I have to wait three or four minutes before the promise settles, and then it fails with an exception: ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT.
This seems to happen while the JSON array is being passed to the procedure parameter; apparently processing it on the database side costs too much memory.
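The only workaround I have come up with so far is splitting the same call into smaller chunks on the Node.js side, roughly as sketched below. The chunk size is arbitrary, I have not confirmed this avoids the PGA error, and note that each chunk would commit separately because of autoCommit; whether this is the right approach is part of my question.

// Hypothetical chunked version of the same call; chunkSize is just an illustrative value.
const chunkSize = 10000;
for (let i = 0; i < data.length; i += chunkSize) {
  const chunk = data.slice(i, i + chunkSize);
  await context.execute(
    'BEGIN PACKAGE_TEST.SP_TEST_STRESS(:p_data, :p_status); END;',
    {
      p_data: { type: 'PACKAGE_TEST.T_STRESS', val: chunk },
      p_status: { type: oracledb.NUMBER, dir: oracledb.BIND_OUT }
    },
    { autoCommit: true } // each chunk commits on its own
  );
}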
- Why does this happen in the second scenario?
- Is there a limit on the number of rows a nested table / VARRAY type can hold, or is there some default configuration in node-oracledb that causes this?
- Oracle suggests increasing pga_aggregate_limit, but checking it in SQL Developer with "show parameter pga;" shows it is 3G. Does that mean the data I am sending exceeds 3 GB? Is that normal?
- Is there a more viable solution that does not affect the performance of the database?
Appreciate your help.