Oracle - fast insert and fast latest records lookup

I have a table with logs which grew in size (~100M records) to the point where querying even the latest entries takes a considerable amount of time.

I am wondering whether there is a smart way to make access to the latest records (largest PK values) fast while also keeping inserts (appends) fast. I would prefer not to delete any data; in fact, there is already a mechanism that monthly deletes logs older than N days.

Ideally, I want the query

select * from t_logs order by log_id desc fetch first 50 rows only

to run in a split second (up to a reasonable row count, say 500 rows, if that matters).

The table is defined as follows:

CREATE TABLE t_logs (
    log_id        NUMBER NOT NULL,
    method_name   VARCHAR2(128 CHAR) NOT NULL,
    msg           VARCHAR2(4000 CHAR) NOT NULL,
    type          VARCHAR2(1 CHAR) NOT NULL,
    time_stamp    TIMESTAMP(6) NOT NULL,
    user_created  VARCHAR2(50 CHAR) DEFAULT user NOT NULL
);

CREATE UNIQUE INDEX logs_pk ON t_logs ( log_id ) REVERSE;

ALTER TABLE t_logs ADD (
    CONSTRAINT logs_pk PRIMARY KEY ( log_id )
);

I am not really a DBA, so I am not familiar with all the performance tuning methods. I just use the logs a lot and was wondering if I could do something non-invasive to the data to ease my pain. To the best of my knowledge, here is what I tried: re-computing statistics / re-analyzing the table (no effect), and looking at the query plan:

-------------------------------------------
| Id  | Operation                | Name   |
-------------------------------------------
|   0 | SELECT STATEMENT         |        |
|   1 |  VIEW                    |        |
|   2 |   WINDOW SORT PUSHED RANK|        |
|   3 |    TABLE ACCESS FULL     | T_LOGS |
-------------------------------------------
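(For reference, the statistics re-computation mentioned above was done along these lines; running it as the owning schema via the SQL*Plus EXEC shortcut is an assumption on my part:)

-- regather optimizer statistics for the table
-- (ownname => USER assumes the current schema owns T_LOGS)
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'T_LOGS');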

I would expect the query to leverage the index to perform the lookup; why doesn't it? Maybe that is the reason it takes so long to find the results?

Version: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production


Mr Cave, in the accepted answer, seems to be right:

alter table t_logs drop constraint logs_pk;
drop index logs_pk;

create unique index logs_pk on t_logs ( log_id );
alter table t_logs add (
    constraint logs_pk primary key ( log_id )
);

Queries run super fast now, and the plan looks as expected:

-------------------------------------------------
| Id  | Operation                     | Name    |
-------------------------------------------------
|   0 | SELECT STATEMENT              |         |
|   1 |  VIEW                         |         |
|   2 |   WINDOW NOSORT STOPKEY       |         |
|   3 |    TABLE ACCESS BY INDEX ROWID| T_LOGS  |
|   4 |     INDEX FULL SCAN DESCENDING| LOGS_PK |
-------------------------------------------------
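As an aside, the same conversion can apparently be done in place with a rebuild, which avoids dropping and recreating the constraint (untested sketch; I have not run this against this table):

-- convert the reverse-key index back to a normal B-tree in place
ALTER INDEX logs_pk REBUILD NOREVERSE;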

CodePudding user response:

100 million rows isn't that large.

Why are you creating a reverse-key index for your primary key? Sure, that has the potential to reduce contention on inserts but were you really constrained by contention? That would be pretty unusual. Maybe you have an unusual environment. But my guess is that someone was trying to prematurely optimize the design for inserts without considering what that did to queries.
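If you are not sure whether an existing index is reverse-key, the data dictionary will tell you: a reverse-key index reports INDEX_TYPE = 'NORMAL/REV'. A query sketch against USER_INDEXES (assumes you own the table):

-- reverse-key indexes show INDEX_TYPE = 'NORMAL/REV',
-- plain B-tree indexes show 'NORMAL'
SELECT index_name, index_type
  FROM user_indexes
 WHERE table_name = 'T_LOGS';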

My wager would be that a nice, basic design would be more than sufficient for your needs:

CREATE TABLE t_logs (
    log_id        NUMBER NOT NULL,
    method_name   VARCHAR2(128 CHAR) NOT NULL,
    msg           VARCHAR2(4000 CHAR) NOT NULL,
    type          VARCHAR2(1 CHAR) NOT NULL,
    time_stamp    TIMESTAMP(6) NOT NULL,
    user_created  VARCHAR2(50 CHAR) DEFAULT user NOT NULL
);

CREATE UNIQUE INDEX logs_pk ON t_logs ( log_id );

ALTER TABLE t_logs ADD (
    CONSTRAINT logs_pk PRIMARY KEY ( log_id )
);

If you can't recreate the primary key for some reason, create an index on time_stamp and change your queries to use that:

CREATE INDEX log_ts ON t_logs( time_stamp );

SELECT *
  FROM t_logs
 ORDER BY time_stamp DESC
 FETCH FIRST 100 ROWS ONLY;
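Either way, you can confirm which access path the optimizer actually picks with EXPLAIN PLAN and DBMS_XPLAN (sketch; with the time_stamp index in place you would hope to see a descending index scan rather than a full table scan):

EXPLAIN PLAN FOR
SELECT *
  FROM t_logs
 ORDER BY time_stamp DESC
 FETCH FIRST 100 ROWS ONLY;

-- display the plan just explained
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);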