Simple statement — why is it so slow? Begging the experts for help!

Time:10-06

The statement is:

SELECT a.*, b.order_item_id, b.service_offer_id
FROM TEM_KD_XY1 a
JOIN CRMDB.L_order_item b
  ON a.offer_comp_inst_id = b.order_item_obj_id
WHERE b.finish_time > SYSDATE - 365
  AND b.lan_id = 3

The execution plan is:


In addition, both join columns, a.offer_comp_inst_id and b.order_item_obj_id, are indexed. Could everyone please help me analyze this? Thank you very much!

CodePudding user response:

How much data do these two tables have? Also, generate a trace file and take a look.

CodePudding user response:

Quote from reply #1 by ghx287524027:
How much data do these two tables have? Also, generate a trace file and take a look.


One table has 400 million records; the other has just over 100,000.

CodePudding user response:

Suggestions:
First: in the FROM clause, use the small table as the base table (place it last in the table list).
Second: enable a 10053 trace to see in detail how the execution plan was generated:

alter session set events '10053 trace name context forever, level 1';  -- enable the 10053 event
explain plan for <your SQL statement>;  -- explain the SQL statement
alter session set events '10053 trace name context off';  -- disable the 10053 event

To find the location of the trace file:

-- run the following statement as DBA
select c.value || '/' || d.instance_name || '_ora_' || a.spid ||
       case when e.value is not null then '_' || e.value end ||
       '.trc' as trace
from v$process a, v$session b, v$parameter c, v$instance d, v$parameter e
where a.addr = b.paddr
  and b.audsid = userenv('sessionid')
  and c.name = 'user_dump_dest'
  and e.name = 'tracefile_identifier';
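
As a lighter-weight alternative to the 10053 trace (my own suggestion, not part of the original advice), the plan can also be viewed with EXPLAIN PLAN followed by DBMS_XPLAN.DISPLAY, using the statement from the original post:

explain plan for
select a.*, b.order_item_id, b.service_offer_id
from TEM_KD_XY1 a
join CRMDB.L_order_item b on a.offer_comp_inst_id = b.order_item_obj_id
where b.finish_time > sysdate - 365 and b.lan_id = 3;

-- display the plan just written to PLAN_TABLE
select * from table(dbms_xplan.display);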

CodePudding user response:

Table TEM_KD_XY1 is accessed with a full table scan — is the column offer_comp_inst_id indexed?

CodePudding user response:

You said a.offer_comp_inst_id is indexed, but the execution plan didn't use the index. Has the TEM_KD_XY1 index been analyzed (statistics gathered)? How high is the repetition rate of that column's values?
Is CRMDB.L_order_item the table with 400 million records? Does it have a suitable index on finish_time? Could partitioning be used for optimization?
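
A minimal sketch of the two checks this reply asks for — refreshing statistics and checking value repetition — using the standard DBMS_STATS package and the ALL_TAB_COL_STATISTICS view (the owner names are assumed from the thread):

-- gather statistics on both tables; cascade => true also analyzes their indexes
begin
  dbms_stats.gather_table_stats(ownname => user, tabname => 'TEM_KD_XY1', cascade => true);
  dbms_stats.gather_table_stats(ownname => 'CRMDB', tabname => 'L_ORDER_ITEM', cascade => true);
end;
/

-- check repetition: a low NUM_DISTINCT relative to the row count
-- means the index offers little selectivity
select column_name, num_distinct
from all_tab_col_statistics
where owner = 'CRMDB'
  and table_name = 'L_ORDER_ITEM'
  and column_name in ('ORDER_ITEM_OBJ_ID', 'FINISH_TIME', 'LAN_ID');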

CodePudding user response:

Quote from reply #5 by zengjc:
You said a.offer_comp_inst_id is indexed, but the execution plan didn't use the index. Has the TEM_KD_XY1 index been analyzed (statistics gathered)? How high is the repetition rate of that column's values?
Is CRMDB.L_order_item the table with 400 million records? Does it have a suitable index on finish_time? Could partitioning be used for optimization?

I see that the temporary table TEM_KD_XY1 participates in the join via a full table scan, so whether that table is indexed doesn't matter.
Is the large table partitioned? When joining the two tables, if partitioning can't be used, the query probably won't be fast.
Besides, the running efficiency of a SQL statement has nothing to do with how "simple" the SQL looks.

CodePudding user response:

Quote from reply #6 by zengjc:
Quote from reply #5 by zengjc:

You said a.offer_comp_inst_id is indexed, but the execution plan didn't use the index. Has the TEM_KD_XY1 index been analyzed (statistics gathered)? How high is the repetition rate of that column's values?
Is CRMDB.L_order_item the table with 400 million records? Does it have a suitable index on finish_time? Could partitioning be used for optimization?

I see that the temporary table TEM_KD_XY1 participates in the join via a full table scan, so whether that table is indexed doesn't matter.
Is the large table partitioned? When joining the two tables, if partitioning can't be used, the query probably won't be fast.
Besides, the running efficiency of a SQL statement has nothing to do with how "simple" the SQL looks.


finish_time is indexed. I checked: the 400-million-record table CRMDB.L_order_item has no partitions.

CodePudding user response:

How long does the query take now? How many rows does it return?

CodePudding user response:

The amount of data you return is also quite large.
Is lan_id indexed?
Consider a composite index on (lan_id, finish_time).
Also consider partitioning the table, and try PARALLEL.
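
The suggestions above might be sketched as follows. The index name and parallel degree are made up for illustration, and any DDL on a 400-million-row table should of course be tested first:

-- composite index covering both filter columns on the big table
create index ix_l_order_item_lan_finish
  on CRMDB.L_order_item (lan_id, finish_time);

-- try parallel execution with a hint (degree 8 is only an example)
select /*+ parallel(b 8) */ a.*, b.order_item_id, b.service_offer_id
from TEM_KD_XY1 a
join CRMDB.L_order_item b on a.offer_comp_inst_id = b.order_item_obj_id
where b.finish_time > sysdate - 365
  and b.lan_id = 3;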

CodePudding user response:

Quote from reply #8 by jycjyc:
How long does the query take now? How many rows does it return?

The full query takes four hours and returns hundreds of thousands of records.
