Query takes long time to run on postgreSQL database despite creating an index


Using PostgreSQL 14.3.1, I have created a database instance that is now 1TB in size. The main userlogs table is 751GB, with 525GB used for data and 226GB used for the various indexes on this table. The userlogs table currently contains over 900 million rows. To assist with querying this table, a separate Logdates table holds all unique dates for the user logs, and userlogs has an integer foreign key column to it called logdateID. Amongst the various indexes on the userlogs table, one is on logdateID. There are 104 date entries in the Logdates table. When running the query below, I would expect the index to be used and the 104 records to be retrieved in a reasonable period of time.

select distinct logdateid from userlogs; 
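
For reference, a minimal sketch of what the schema described above might look like (only logdateid and the index name come from the question; the other column names and the exact data types are assumptions):

-- assumed minimal versions of the two tables and the index
create table logdates (
    logdateid integer primary key,
    logdate   date not null unique
);

create table userlogs (
    userlogid bigint generated always as identity primary key,
    logdateid integer not null references logdates (logdateid)
    -- ... other log columns ...
);

create index ix_userlogs_logdateid on userlogs (logdateid);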

This query took a few hours to return with the data. I did an explain plan on the query and the output is as shown below.

"HashAggregate  (cost=80564410.60..80564412.60 rows=200 width=4)"
"  Group Key: logdateid"
"  ->  Seq Scan on userlogs  (cost=0.00..78220134.28 rows=937710528 width=4)"

I then issued the command below to get the database to use the index.

set enable_seqscan=off

The revised explain plan is shown below:

"Unique  (cost=0.57..3705494150.82 rows=200 width=4)"
    "  ->  Index Only Scan using ix_userlogs_logdateid on userlogs  (cost=0.57..3703149874.49 rows=937710528 width=4)"

However, when running the same query, it still takes a few hours to retrieve the data. My question is, why should it take that long to retrieve the data if it is doing an index only scan?

The machine on which the database sits is highly spec'd: a 16-core Xeon processor that, with virtualisation enabled, gives 32 logical cores. There is 96GB of RAM, and data storage is on a RAID 10 configured 2TB SSD, with a separate 500GB system SSD.

CodePudding user response:

There is no way to optimize such queries in PostgreSQL, due to the internal structure of its data storage as rows inside pages.

All queries involving an aggregate in PostgreSQL, such as COUNT, COUNT DISTINCT or DISTINCT, must read all the rows inside the table pages to produce the result.

Take a look at the paper I wrote about this problem: PostGreSQL vs Microsoft SQL Server – Comparison part 2 : COUNT performances

CodePudding user response:

It seems like your table has none of its pages set as all visible (compare pg_class.relallvisible to the actual number of pages in the table), which is weird because even insert-only tables should get autovacuumed in v13 and up. This will severely punish the index-only scan. You can try to manually vacuum the table to see if that changes things.
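
For example, something along these lines shows how much of the table is marked all-visible (the table name is taken from your question), and a manual vacuum should bring the visibility map up to date:

-- compare pages marked all-visible against the total pages in the table
select relname, relpages, relallvisible
from pg_class
where relname = 'userlogs';

-- a manual vacuum updates the visibility map; VERBOSE is optional
vacuum (verbose) userlogs;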

It is also weird that it is not using parallelization. It certainly should be. What are your non-default configuration settings?
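
As a rough starting point, something like this lists the key parallel setting and anything changed from its built-in default (only a sketch; which settings matter depends on your setup):

-- parallel plans are disabled entirely if this is 0
show max_parallel_workers_per_gather;

-- list every setting that differs from its built-in default
select name, setting, source
from pg_settings
where source not in ('default', 'override');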

Finally, I wouldn't expect even the poor plan you show to take a few hours. Maybe your hardware is not performing up to what it should. (Also, RAID 10 requires at least 4 disks, but your description makes it sound like that is not what you have)

Since you have the foreign key table, you could use it in your query, testing for each date that it has at least one matching row in the log table:

select logdateid from logdates where exists 
  (select 1 from userlogs where userlogs.logdateid = logdates.logdateid); 
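
If the planner cooperates, this becomes roughly one cheap index probe per date instead of a scan over 900 million rows; you can check the plan it actually chooses with something like:

explain (analyze, buffers)
select logdateid from logdates where exists
  (select 1 from userlogs where userlogs.logdateid = logdates.logdateid);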