I'm running an ETL over a table with 100 million records, extracting the data from TERADATA and loading it into ORACLE. The process is taking too long, and I'd like to know whether there is any way to tune the query below to improve its performance.
MERGE INTO TABLE_A tb
USING (
    SELECT t.COLUMN_A AS COLUMN_A_OLD
    FROM TABLE_B t
    LEFT OUTER JOIN STAGE s
        ON s.COLUMN_B = t.COLUMN_B
        AND s.COLUMN_C = t.COLUMN_C
        AND s.COLUMN_D = t.COLUMN_D
        AND s.COLUMN_E = t.COLUMN_E
        AND s.COLUMN_F = to_date('yyyy-mm-dd 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
    WHERE t.COLUMN_F = to_date('2100-12-31 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
        AND s.COLUMN_C IS NULL
) stg
ON ( stg.COLUMN_A_OLD = tb.COLUMN_A )
WHEN MATCHED THEN
    UPDATE SET
        tb.COLUMN_F = sysdate,
        tb.COLUMN_G = $$PARAMETER,
        tb.COLUMN_H = sysdate;
commit;
Thanks.
CodePudding user response:
Have you tried something like this?
alter session enable parallel dml;
merge /*+ parallel(10) */ into ….
Note that a hint must start with /*+ (with the plus sign); with a plain /* it is just a comment and Oracle ignores it. The PARALLEL hint enables read parallelism, but to also enable write parallelism you need to either run the ALTER SESSION command above or add the /*+ ENABLE_PARALLEL_DML */ hint to the statement.
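Putting it together with your statement, a sketch might look like the one below. The degree of parallelism of 10 is only an example value to tune for your hardware, and the date placeholders from your original query are left as-is:

-- Option 1: enable parallel DML for the whole session
-- (or rely on the statement-level hint shown below instead)
alter session enable parallel dml;

-- ENABLE_PARALLEL_DML (available in recent Oracle releases) enables
-- parallel DML for this statement only; PARALLEL(10) sets the DOP.
MERGE /*+ ENABLE_PARALLEL_DML PARALLEL(10) */ INTO TABLE_A tb
USING (
    SELECT t.COLUMN_A AS COLUMN_A_OLD
    FROM TABLE_B t
    LEFT OUTER JOIN STAGE s
        ON s.COLUMN_B = t.COLUMN_B
        AND s.COLUMN_C = t.COLUMN_C
        AND s.COLUMN_D = t.COLUMN_D
        AND s.COLUMN_E = t.COLUMN_E
        AND s.COLUMN_F = to_date('yyyy-mm-dd 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
    WHERE t.COLUMN_F = to_date('2100-12-31 00:00:00', 'yyyy-mm-dd hh24:mi:ss')
        AND s.COLUMN_C IS NULL
) stg
ON ( stg.COLUMN_A_OLD = tb.COLUMN_A )
WHEN MATCHED THEN
    UPDATE SET
        tb.COLUMN_F = sysdate,
        tb.COLUMN_G = $$PARAMETER,
        tb.COLUMN_H = sysdate;
commit;

You can verify that parallel DML actually kicked in by checking the execution plan (for example with DBMS_XPLAN.DISPLAY_CURSOR): the UPDATE step of the MERGE should appear below the PX COORDINATOR operators. If it sits above them, only the read side ran in parallel.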