Oracle MERGE INTO queries are getting slower and slower, does anyone know the reason?


 
MERGE INTO ADMIN.JHJ_SBJLXX t1
USING (select count(*) c from JHJ_SBJLXX t2 where t2.PJH = '9b35882f2f') x ON (x.c > 0)
when matched then
    update set t1.CKH = '4013501', t1.JHSJ = to_date('2019-05-14 16:41:37', 'yyyy-mm-dd hh24:mi:ss'), t1.YWBJSJ = '', t1.SFJH = '1' where t1.PJH = '9b35882f2f'
when not matched then
    insert (t1.SBMC, t1.SBBH, t1.XZQH, t1.XZQHMC, t1.SXBM, t1.SXMC, t1.PJH, t1.JHRXM, t1.JHRZJHM, t1.JHRLXDH, t1.QHSJ, t1.SFJH, t1.JHSJ, t1.CKH, t1.BMMC, t1.YWLX, t1.YWBJSJ)
    values ('2nd floor number machine', '2l01', '420105007000', 'Five Piers Street office, Wuhan, Hubei', '99', 'real estate sign', '9b35882f2f', 'expericnce', '412823199212103659', '', to_date('2019-05-14 16:41:00', 'yyyy-mm-dd hh24:mi:ss'), '1', to_date('2019-05-14 16:41:37', 'yyyy-mm-dd hh24:mi:ss'), '4013501', 'public security', '0', '')

Here is my code; it uses executemany for bulk inserts, 1000 rows in total. Sometimes it takes 10 minutes, and sometimes it still hasn't finished after 30 minutes. Why is that?
 
jh_insert_sql = """MERGE INTO ADMIN.JHJ_SBJLXX t1
USING (select count(*) c from ADMIN.JHJ_SBJLXX t2 where t2.PJH = :pjh1 and t2.XZQH = :xzqh1) x ON (x.c > 0)
when matched then
    update set t1.CKH = :ckh1, t1.JHSJ = to_date(:jhsj1, 'yyyy-mm-dd hh24:mi:ss'), t1.YWBJSJ = :ywbjsj1, t1.SFJH = :sfjh1 where t1.PJH = :PJH
when not matched then
    insert (t1.SBMC, t1.SBBH, t1.XZQH, t1.XZQHMC, t1.SXBM, t1.SXMC, t1.PJH, t1.JHRXM, t1.JHRZJHM, t1.JHRLXDH, t1.QHSJ, t1.SFJH, t1.JHSJ, t1.CKH, t1.BMMC, t1.YWLX, t1.YWBJSJ)
    values (:SBMC, :SBBH, :XZQH, :XZQHMC, :SXBM, :SXMC, :PJH, :JHRXM, :JHRZJHM, :JHRLXDH, to_date(:QHSJ, 'yyyy-mm-dd hh24:mi:ss'), :SFJH, to_date(:JHSJ, 'yyyy-mm-dd hh24:mi:ss'), :CKH, :BMMC, :ywlx, :ywbjsj)"""

count += 1
# insert 300 rows of data each time
if count >= 300:
    try:
        hyzw_mysql_cursor.executemany(jh_insert_sql, insert_jh_list)
        hyzw_mysql_conn.commit()
        print('executed ' + str(300 * num) + ' rows')
    except Exception as e:
        print(e)
        print(insert_jh_list)
    insert_jh_list = []
    count = 0
    num += 1
if count > 0:
    hyzw_mysql_cursor.executemany(jh_insert_sql, insert_jh_list)
    hyzw_mysql_conn.commit()

CodePudding user response:

What you are batch-executing is a merge, not a plain insert.

Try it differently: every 300 rows, use executemany to insert the batch into a temporary table, then run a single merge to process those 300 rows.
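
A minimal sketch of that suggestion, assuming cx_Oracle and a staging table (the name JHJ_SBJLXX_STG, the reduced column list, and the column types are illustrative assumptions, not from the thread):

# One-time DDL (assumed, with guessed column types); a global temporary
# table with ON COMMIT DELETE ROWS empties itself at every commit:
#   CREATE GLOBAL TEMPORARY TABLE JHJ_SBJLXX_STG (
#       PJH VARCHAR2(64), XZQH VARCHAR2(12), CKH VARCHAR2(16),
#       JHSJ DATE, SFJH VARCHAR2(1)
#   ) ON COMMIT DELETE ROWS;

# Stage the batch with a plain bulk insert, then run ONE set-based merge
# instead of 300 row-by-row merges. Assumes the rows in insert_jh_list are
# dicts carrying these bind names.
stage_sql = """INSERT INTO JHJ_SBJLXX_STG (PJH, XZQH, CKH, JHSJ, SFJH)
    VALUES (:pjh1, :xzqh1, :ckh1, to_date(:jhsj1, 'yyyy-mm-dd hh24:mi:ss'), :sfjh1)"""
merge_sql = """MERGE INTO ADMIN.JHJ_SBJLXX t1
    USING JHJ_SBJLXX_STG s ON (t1.PJH = s.PJH and t1.XZQH = s.XZQH)
    when matched then
        update set t1.CKH = s.CKH, t1.JHSJ = s.JHSJ, t1.SFJH = s.SFJH
    when not matched then
        insert (t1.PJH, t1.XZQH, t1.CKH, t1.JHSJ, t1.SFJH)
        values (s.PJH, s.XZQH, s.CKH, s.JHSJ, s.SFJH)"""

hyzw_mysql_cursor.executemany(stage_sql, insert_jh_list)  # cheap bulk insert
hyzw_mysql_cursor.execute(merge_sql)                      # one merge per batch
hyzw_mysql_conn.commit()                                  # also clears the temp table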

CodePudding user response:

Before, I also ran into a similar problem: in the test environment it ran for more than an hour with no result, but in the production environment it finished in just over ten minutes.
My analysis was that it is tied to the server configuration: when cache resources are insufficient, the number of I/O operations goes up, which drags down execution efficiency. Think along those lines and see what your problem is.
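
If you want to check the I/O theory rather than guess, one rough probe (assuming you have SELECT privilege on the v$ views; the cursor name reuses the one from the question):

# Are I/O waits such as 'db file sequential read' dominating while the
# merge runs? Top ten system-wide wait events by time waited:
hyzw_mysql_cursor.execute("""
    select * from (
        select event, total_waits, time_waited
        from v$system_event
        order by time_waited desc
    ) where rownum <= 10""")
for row in hyzw_mysql_cursor.fetchall():
    print(row)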

CodePudding user response:

reference 1st floor nayi_224 response:
What you are batch-executing is a merge, not a plain insert.

Try it differently: every 300 rows, use executemany to insert the batch into a temporary table, then run a single merge to process those 300 rows.

This merge statement updates rows keyed on PJH; in practice there is no insert, and the syntax of the insert and the update branches would be the same anyway.

CodePudding user response:

refer to the 2nd floor whhhhh1991 response:
Before, I also ran into a similar problem: in the test environment it ran for more than an hour with no result, but in the production environment it finished in just over ten minutes.
My analysis was that it is tied to the server configuration: when cache resources are insufficient, the number of I/O operations goes up, which drags down execution efficiency. Think along those lines and see what your problem is.

Before this, my test database table was empty, while the production table already held 800,000 rows.
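
That difference points at the USING subquery: against an empty test table the count(*) costs nothing, but against 800,000 rows it can full-scan the table once per bound row, i.e. up to 1000 full scans per executemany call. A quick way to confirm, as a sketch using the literal values from the first post (EXPLAIN PLAN and dbms_xplan.display are standard Oracle):

# If the plan shows TABLE ACCESS FULL on JHJ_SBJLXX under the USING branch,
# every bound row pays a full scan of the 800k-row table.
hyzw_mysql_cursor.execute("""
    EXPLAIN PLAN FOR
    MERGE INTO ADMIN.JHJ_SBJLXX t1
    USING (select count(*) c from ADMIN.JHJ_SBJLXX t2
           where t2.PJH = '9b35882f2f' and t2.XZQH = '420105007000') x
    ON (x.c > 0)
    when matched then
        update set t1.CKH = '4013501' where t1.PJH = '9b35882f2f'
    when not matched then
        insert (t1.PJH) values ('9b35882f2f')""")
hyzw_mysql_cursor.execute("select plan_table_output from table(dbms_xplan.display)")
for (line,) in hyzw_mysql_cursor.fetchall():
    print(line)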

CodePudding user response:

With different database configurations, running speed can differ enormously.

CodePudding user response:

As the target table gets bigger and bigger, changes in the execution plan can also slow the merge down, and the more indexes there are on the table, the more they hurt insert speed. If you want the merge insert speed to stay roughly stable, partitioning is a scheme worth considering, provided of course that the data volume per partition stays stable; see the sketch below.
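
A sketch of that partitioning idea; the hash key (PJH) and the partition count are purely illustrative assumptions, not from the thread:

# Illustrative only: rebuild the target as a hash-partitioned table so each
# insert lands in one of several smaller segments.
hyzw_mysql_cursor.execute("""
    CREATE TABLE ADMIN.JHJ_SBJLXX_PART
    PARTITION BY HASH (PJH) PARTITIONS 8
    AS SELECT * FROM ADMIN.JHJ_SBJLXX""")
# A LOCAL index keeps each partition's index tree small:
hyzw_mysql_cursor.execute(
    "CREATE INDEX ADMIN.IDX_SBJLXX_PJH ON ADMIN.JHJ_SBJLXX_PART (PJH) LOCAL")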