Why is CLOB slower than VARCHAR2 in Oracle?


We currently have a table with a VARCHAR2(4000) column, but that became a limitation because the text being inserted can grow beyond 4000 characters, so we changed the column's data type to CLOB. Now both insertions and selections are far slower than they were with VARCHAR2(4000). We use Python with SQLAlchemy for both the insertions and the retrieval of the data. In short, the implementation itself did not change at all, only the column data type in the database.
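For context, a minimal sketch of the kind of mapping involved (the table name, column name, and connection string are illustrative placeholders, not the real schema):

```python
# Sketch of the setup described above; "documents"/"text" and the
# connection string are assumptions, not the asker's real schema.
from sqlalchemy import Column, Integer, Text, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Document(Base):
    __tablename__ = "documents"
    id = Column(Integer, primary_key=True)
    # Before: Column(String(4000)) -> VARCHAR2(4000) in Oracle.
    # After:  Column(Text)         -> CLOB in Oracle's dialect.
    text = Column(Text)

engine = create_engine("oracle+cx_oracle://user:pass@host/?service_name=orclpdb")
with Session(engine) as session:
    session.add(Document(text="some long text ..."))
    session.commit()
```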

Does anyone have any ideas on how to improve the performance?

CodePudding user response:

There are two kinds of storage for CLOBs:

  1. in row

The CLOB is stored like any other column in the row. This can only be done for CLOBs up to a certain size (approximately 4K). CLOBs larger than this will be stored in a separate segment (the "lobsegment").

  2. out of row

The CLOB is always stored out of the row, in the lobsegment.

You can check which is being used for your table by querying USER_LOBS.
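For example (a sketch; the table name is a placeholder, and USER_LOBS only covers tables in your own schema):

```python
# Query USER_LOBS to see how the CLOB is stored. IN_ROW = 'YES' means
# small values are kept inline with the row; 'NO' means the LOB always
# lives in the separate lobsegment.
from sqlalchemy import create_engine, text

engine = create_engine("oracle+cx_oracle://user:pass@host/?service_name=orclpdb")
with engine.connect() as conn:
    rows = conn.execute(
        text(
            "SELECT table_name, column_name, segment_name, in_row "
            "FROM user_lobs WHERE table_name = :t"
        ),
        {"t": "DOCUMENTS"},
    )
    for row in rows:
        print(row.table_name, row.column_name, row.segment_name, row.in_row)
```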

It is possible, particularly in the first 'in row' case, that your table consumes more blocks for the "normal" rows because of the interspersed LOB data, and hence takes longer to scan.

See here: https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:9536389800346757758
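If that turns out to be the problem, one option (a sketch only; test on a copy first) is to rebuild the LOB out of row, so that scans of the non-LOB columns touch fewer blocks:

```python
# Force out-of-row storage for the CLOB. Table/column names are
# placeholders. In Oracle, DDL commits implicitly, and ALTER TABLE ...
# MOVE invalidates the table's indexes, which must be rebuilt afterwards.
from sqlalchemy import create_engine, text

engine = create_engine("oracle+cx_oracle://user:pass@host/?service_name=orclpdb")
with engine.connect() as conn:
    conn.execute(text(
        "ALTER TABLE documents MOVE "
        "LOB (text) STORE AS SECUREFILE (DISABLE STORAGE IN ROW)"
    ))
```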

If your LOB is stored separately, you have to navigate a separate indexing architecture and additional data files to get to the data. This will slow things down because of the additional background operations and disk I/O involved. There isn't much you can do about that except minimize the number of operations that access the LOB data, or get faster storage media.
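On the application side, one way to minimize those accesses with SQLAlchemy (a sketch, reusing the hypothetical mapping from the question) is to defer loading of the CLOB column, so ordinary SELECTs never touch the lobsegment unless the attribute is actually used:

```python
# Defer the CLOB column: queries for Document rows skip the LOB entirely,
# and the CLOB is fetched only when doc.text is first accessed (or when a
# query opts in with undefer()).
from sqlalchemy import Column, Integer, Text
from sqlalchemy.orm import Session, declarative_base, deferred, undefer

Base = declarative_base()

class Document(Base):
    __tablename__ = "documents"
    id = Column(Integer, primary_key=True)
    text = deferred(Column(Text))

# Usage: session.query(Document).options(undefer(Document.text)) when the
# text is genuinely needed; plain queries then avoid the LOB I/O entirely.
```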

CodePudding user response:

You could also ask your DBA whether the database can be upgraded to MAX_STRING_SIZE=EXTENDED; the maximum VARCHAR2 size would then be 32K, so you could keep using VARCHAR2 instead of CLOB.
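For illustration, a sketch of the migration that would follow (names are placeholders; note that switching to EXTENDED is a one-way change the DBA performs with the database restarted in UPGRADE mode, and an existing CLOB column cannot be converted back to VARCHAR2 in place, so the data has to be copied):

```python
# Once MAX_STRING_SIZE=EXTENDED is enabled, migrate the CLOB back to an
# extended VARCHAR2 via add-copy-drop-rename. Querying v$parameter may
# require extra privileges. Names are placeholders.
from sqlalchemy import create_engine, text

engine = create_engine("oracle+cx_oracle://user:pass@host/?service_name=orclpdb")
with engine.begin() as conn:
    setting = conn.execute(
        text("SELECT value FROM v$parameter WHERE name = 'max_string_size'")
    ).scalar()
    if setting == "EXTENDED":
        conn.execute(text("ALTER TABLE documents ADD (text_v2 VARCHAR2(32767))"))
        # Implicit CLOB -> VARCHAR2 conversion; errors if any value is too long.
        conn.execute(text("UPDATE documents SET text_v2 = text"))
        conn.execute(text("ALTER TABLE documents DROP COLUMN text"))
        conn.execute(text("ALTER TABLE documents RENAME COLUMN text_v2 TO text"))
```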
