Pipelined function that uses a table accessed via DB Link


I have created this pipelined function to fetch configuration from a table stored in a database that I access via a DB link:

CREATE OR REPLACE FUNCTION fetch_config (
    process_i         IN   VARCHAR2,
    procedure_i       IN   VARCHAR2,
    sub_procedure_i   IN   VARCHAR2
) RETURN t_config_type
    PIPELINED
AS
BEGIN
    FOR r_row IN (
        SELECT
            zprocess,
            zprocedure,
            zsub_procedure,
            zcriteria,
            zfield,
            zfield2,
            zvalue_enabled,
            zenabled
        FROM
            cdc.uap_zufi_dunn_conf@rbip
        WHERE
            zprocess = process_i
            AND zprocedure = procedure_i
            AND zsub_procedure = sub_procedure_i
    ) LOOP
        PIPE ROW ( config_type(r_row.zprocess, r_row.zprocedure, r_row.zsub_procedure, r_row.zcriteria, r_row.zfield,
            r_row.zfield2, r_row.zvalue_enabled, r_row.zenabled) );
    END LOOP;

    RETURN;
END fetch_config;

However, when I try to use the function in dynamic SQL, the following error is thrown:

BEGIN
    EXECUTE IMMEDIATE q'[
    CREATE TABLE my_table AS
        SELECT
            *
        FROM
            another_table
        WHERE
            cacont_acc IN (
                SELECT
                    zvalue_enabled
                FROM
                    TABLE ( fetch_config('GLOBAL', 'EXCLUSIONS', 'ZBUT000_ATTRIBUTES') ))
    ]'
    ;
END;

Error:

ORA-06512: at line 79
ORA-12840: cannot access a remote table after parallel/insert direct load txn
*Cause:    Within a transaction, an attempt was made to perform distributed
           access after a PDML or insert direct statement had been issued.
*Action:   Commit/rollback the PDML transaction first, and then perform
           the distributed access, or perform the distributed access before
           the first PDML statement in the transaction.

I've also tried creating a view in my local DB pointing to that table, but it fails with the same error. What is the workaround for this issue?

CodePudding user response:

It's the combination of a Create Table As Select (CTAS) with a pipelined function that references a remote object that causes the error "ORA-12840: cannot access a remote table after parallel/insert direct load txn". CTAS statements always use an optimized type of write called a direct-path write, but those direct-path writes do not play well with remote objects. There are several workarounds, such as separating your statements into a separate DDL and DML step, or using a common table expression to force Oracle to run the operations in an order that works.

Direct Path Writes

The below code demonstrates that CTAS statements appear to always use direct-path writes. A regular insert would include an operation like "LOAD TABLE CONVENTIONAL", but a direct path write shows up as the operation "LOAD AS SELECT".

drop table my_table;
explain plan for create table my_table as select 1 a from dual;
select * from table(dbms_xplan.display(format => 'basic'));

Plan hash value: 2781518217
 
-----------------------------------------------------
| Id  | Operation                        | Name     |
-----------------------------------------------------
|   0 | CREATE TABLE STATEMENT           |          |
|   1 |  LOAD AS SELECT                  | MY_TABLE |
|   2 |   OPTIMIZER STATISTICS GATHERING |          |
|   3 |    FAST DUAL                     |          |
-----------------------------------------------------

(However - I don't think CTAS uses a "real" direct path write. Using a real direct path write every time would cause data issues. There would have to be a mechanism to allow conventional writes, but nothing I tried, such as NOLOGGING, NOAPPEND, or creating relational constraints, was able to force a CTAS to use a "LOAD TABLE CONVENTIONAL" operation. I think a CTAS is really using some type of optimization halfway between conventional and direct path.)
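For example, the variations mentioned above can be tested the same way as the first plan; as noted, none of them change the "LOAD AS SELECT" operation (the NOAPPEND hint is documented for INSERT statements and appears to be ignored in a CTAS):

```sql
-- Attempts to force a conventional load in a CTAS; the plan still
-- shows "LOAD AS SELECT" rather than "LOAD TABLE CONVENTIONAL".
drop table my_table;
explain plan for create table my_table nologging
    as select /*+ NOAPPEND */ 1 a from dual;
select * from table(dbms_xplan.display(format => 'basic'));
```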

Direct path writes are optimized for performance but come at the expense of consistency. Transactions, even the same transaction, cannot write or read from the same object before committing a direct path write. This isn't normally a problem with CTAS, because it all happens in one step. But when there's a remote database, Oracle doesn't know what kind of transactions are happening with that remote database. And accessing a remote object always creates a transaction, so as soon as Oracle calls the remote object in the pipelined function it can't tell what's going on remotely and raises "ORA-12840: cannot access a remote table after parallel/insert direct load txn".
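You can watch that transaction get created with DBMS_TRANSACTION. This sketch assumes the link and table from the question; any remote object will do:

```sql
-- Even a read-only query over a database link opens a transaction.
SELECT dbms_transaction.local_transaction_id FROM dual;  -- NULL: no transaction yet

SELECT COUNT(*) FROM cdc.uap_zufi_dunn_conf@rbip;        -- distributed access

SELECT dbms_transaction.local_transaction_id FROM dual;  -- typically now non-NULL

COMMIT;  -- ends the distributed transaction
```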

Workarounds

Avoiding the CTAS may be the most straightforward way to prevent this error. Isolate the CTAS direct path write in a separate statement, and then use a regular insert that will use a "LOAD TABLE CONVENTIONAL" operation that works fine with database links.

    --Warning: keep these as two separate statements; folding the insert
    --back into the CTAS re-raises ORA-12840.
    EXECUTE IMMEDIATE q'[
    CREATE TABLE my_table AS
        SELECT
            *
        FROM
            another_table
        WHERE
            1=0
    ]';

    EXECUTE IMMEDIATE q'[
    INSERT INTO my_table
        SELECT
            *
        FROM
            another_table
        WHERE
            cacont_acc IN (
                SELECT
                    zvalue_enabled
                FROM
                    TABLE ( fetch_config('GLOBAL', 'EXCLUSIONS', 'ZBUT000_ATTRIBUTES') ))
    ]';
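If you want to confirm that the second step really uses a conventional load, explain a simplified insert the same way as the CTAS above (this sketch uses DUAL so it runs without the remote objects):

```sql
-- A regular insert shows "LOAD TABLE CONVENTIONAL" instead of
-- "LOAD AS SELECT", which works fine with database links.
explain plan for insert into my_table select 1 a from dual;
select * from table(dbms_xplan.display(format => 'basic'));
```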

But if you want to avoid repeating any code, and do everything in a single step, you can use a Common Table Expression (CTE).

    EXECUTE IMMEDIATE q'[
    CREATE TABLE my_table AS
        WITH configs AS
        (
            --Use CTE and MATERIALIZE hint to avoid ORA-12840.
            SELECT /*+ MATERIALIZE */
                zvalue_enabled
            FROM
                TABLE ( fetch_config('GLOBAL', 'EXCLUSIONS', 'ZBUT000_ATTRIBUTES') )
        )
        SELECT
            *
        FROM
            another_table
        WHERE
            cacont_acc IN (SELECT zvalue_enabled FROM configs)
    ]'
    ;

The CTE and the MATERIALIZE hint force Oracle to retrieve the results of the remote object first and store them in a temporary table. When the CTAS gets executed, it reads from the temporary table and doesn't notice the database link anymore. The execution plan will look something like below:

--------------------------------------------------------------------------------
| Id  | Operation                                | Name                        |
--------------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT                   |                             |
|   1 |  TEMP TABLE TRANSFORMATION               |                             |
|   2 |   LOAD AS SELECT (CURSOR DURATION MEMORY)| SYS_TEMP_0FD9D6707_8AAECC0C |
|   3 |    COLLECTION ITERATOR PICKLER FETCH     | FETCH_CONFIG                |
|   4 |   LOAD AS SELECT                         | MY_TABLE                    |
...