A large amount of oracle insert data

Time:09-24

In an Oracle stored procedure that gathers data, I need to query data from views and insert it into a partitioned table. There are about five views, each with roughly 10 million rows. The first approach, INSERT INTO table (...) SELECT ..., takes only 2 to 3 minutes. The second approach uses a cursor, bulk-reading batches from the cursor and then inserting them:
dbms_output.put_line('start time: ' || sysdate);
open bras_cur;
loop
    fetch bras_cur bulk collect into resource_tab limit 50000;
    exit when resource_tab.count = 0;
    for i in resource_tab.first .. resource_tab.last loop
        insert into xxx ...;
    end loop;
end loop;
close bras_cur;

This second approach takes about 15 minutes.
Why is the first approach so much more efficient than the second? Could an expert explain in which situations INSERT INTO ... SELECT is appropriate,
and in which situations the second, batched bulk insert is appropriate?
Or, in this case, what else can I do to improve efficiency?

CodePudding user response:

Set LIMIT to around 1000-5000 and do the INSERT below it with FORALL; that will certainly be faster than your first approach.
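A minimal sketch of what this reply suggests, reusing the cursor and collection names from the original post (`bras_cur` and `resource_tab`; the source view and target table names are placeholders). The key change is replacing the row-by-row FOR loop with FORALL, which sends each whole batch to the SQL engine in a single context switch:

```sql
DECLARE
  CURSOR bras_cur IS SELECT * FROM source_view;  -- placeholder source view
  TYPE resource_t IS TABLE OF bras_cur%ROWTYPE;
  resource_tab resource_t;
BEGIN
  OPEN bras_cur;
  LOOP
    -- Smaller batches than 50000: keeps PGA memory use modest
    FETCH bras_cur BULK COLLECT INTO resource_tab LIMIT 5000;
    EXIT WHEN resource_tab.COUNT = 0;

    -- One context switch per batch instead of one per row
    FORALL i IN 1 .. resource_tab.COUNT
      INSERT INTO target_table VALUES resource_tab(i);

    COMMIT;  -- optional: commit per batch to limit undo usage
  END LOOP;
  CLOSE bras_cur;
END;
/
```

The row-by-row loop in the question pays a PL/SQL-to-SQL context switch for every single insert; FORALL amortizes that cost over the whole batch, which is where most of the speedup comes from.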

CodePudding user response:

OK, I'll give it a try.

CodePudding user response:

The FOR-loop approach is like someone lending you 10,000 dollars and you paying it back one dollar at a time: you repay a dollar, they record that you still owe 9,999; you repay another, now you owe 9,998, and so on.


CodePudding user response:

refer to 3rd floor wmxcn2000 response:
The FOR-loop approach is like someone lending you 10,000 dollars and you paying it back one dollar at a time: you repay a dollar, they record that you still owe 9,999; you repay another, now you owe 9,998, and so on.

+ 1
If the target table is empty, CREATE TABLE ... AS SELECT * FROM ... is even faster.
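A sketch of this suggestion, assuming the target table can be dropped and recreated (table and view names are placeholders):

```sql
-- CTAS performs a direct-path load into a brand-new segment;
-- with NOLOGGING it also skips most redo generation
CREATE TABLE target_table NOLOGGING
AS SELECT * FROM source_view;
```

Note that a NOLOGGING load is not recoverable from the redo logs, so take a backup of the table afterwards if the data matters.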

CodePudding user response:

When inserting a large amount of data, first make sure there is enough disk space and tablespace, and consider whether the table carries indexes, and so on.

CodePudding user response:

10 million rows, and you still dare to do it that way? Aren't you afraid of filling up the transaction (redo/undo) logs?

CodePudding user response:

refer to 6th floor wandier response:
10 million rows, and you still dare to do it that way? Aren't you afraid of filling up the transaction (redo/undo) logs?


In data warehouses, tables with tens of millions or even hundreds of millions of rows are common ~

There isn't much room to optimize the INSERT statement itself; the routine approach is the APPEND/APPEND_VALUES hints plus setting the table to NOLOGGING.
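What this routine looks like in practice, with placeholder table and view names:

```sql
-- Put the table in NOLOGGING mode so the direct-path load generates minimal redo
ALTER TABLE target_table NOLOGGING;

-- The APPEND hint requests a direct-path insert: data is written above the
-- high-water mark, bypassing the buffer cache and conventional space search
INSERT /*+ APPEND */ INTO target_table
SELECT * FROM source_view;

COMMIT;  -- a direct-path insert must be committed before the session reads the table again
```

APPEND_VALUES is the equivalent hint for the INSERT ... VALUES form (e.g. inside a FORALL). As with CTAS NOLOGGING, the loaded data cannot be recovered from redo, so schedule a backup after the load.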

CodePudding user response:

refer to 7th floor minsic78 response:
Quote: refer to 6th floor wandier response:

10 million rows, and you still dare to do it that way? Aren't you afraid of filling up the transaction (redo/undo) logs?


In data warehouses, tables with tens of millions or even hundreds of millions of rows are common ~

There isn't much room to optimize the INSERT statement itself; the routine approach is the APPEND/APPEND_VALUES hints plus setting the table to NOLOGGING.

Export the table data, create a new table without indexes, import into the new table, and build the indexes last; that works better.

CodePudding user response:

refer to 8th floor wandier response:
Quote: refer to 7th floor minsic78 response:
Quote: refer to 6th floor wandier response:

10 million rows, and you still dare to do it that way? Aren't you afraid of filling up the transaction (redo/undo) logs?


In data warehouses, tables with tens of millions or even hundreds of millions of rows are common ~

There isn't much room to optimize the INSERT statement itself; the routine approach is the APPEND/APPEND_VALUES hints plus setting the table to NOLOGGING.

Export the table data, create a new table without indexes, import into the new table, and build the indexes last; that works better.


If conditions allow, having no indexes during the load is indeed best; indexes have too large an impact on insert performance.
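When the table already exists and dropping it is not an option, a common variant of this advice is to mark the indexes unusable for the load and rebuild them once afterwards. A sketch, with placeholder index and table names:

```sql
-- Take the index out of play before the bulk load
ALTER INDEX target_table_idx UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;

INSERT /*+ APPEND */ INTO target_table
SELECT * FROM source_view;
COMMIT;

-- Rebuild once at the end: a single sorted build is far cheaper
-- than maintaining the index row by row during the insert
ALTER INDEX target_table_idx REBUILD NOLOGGING;
```

This avoids the export/import round trip while getting the same benefit: the insert runs against an index-free table.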