Despite the existence of relevant indices, PostgreSQL query is slow


I have a table with 3 columns:

  • time (timestamptz)
  • price (numeric(8,2))
  • set_id (int)

The table contains 7.4M records. I've created a simple index on time and another on set_id. I want to run the following query:

select * from test_prices where time BETWEEN '2015-06-05 00:00:00+00' and '2020-06-05 00:00:00+00';
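
For reference, a minimal sketch of the schema as described above (the table name matches the query; the index names are assumptions):

create table test_prices (
    time   timestamptz,
    price  numeric(8,2),
    set_id int
);

-- the two single-column indexes mentioned above
create index test_prices_time_idx   on test_prices (time);
create index test_prices_set_id_idx on test_prices (set_id);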

Despite my indices, the query takes 2 minutes and 30 seconds. See the attached EXPLAIN ANALYZE output (screenshot).

What am I missing here? Why is this query so slow, and how can I improve it?

CodePudding user response:

According to your explain plan, the query is returning 1.6 million rows out of 4.5 million. That means a significant portion of the table is being returned.

Postgres wisely decides that a full table scan is more efficient than using an index, because there is a good chance that all the data pages will need to be read anyway.

It is surprising that you are reporting 00:02:30 for the query. The EXPLAIN output says the query completes in about 1.4 seconds, which seems reasonable.

I suspect that the elapsed time is caused by the volume of data being returned (perhaps the rows are very wide), a slow network connection to the database, or contention on the database/server.
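
If you want to verify the planner's choice yourself, one way (a diagnostic sketch, using the table and query from the question) is to compare the plan and buffer usage with sequential scans discouraged:

-- diagnostic only; do not disable sequential scans in production
set enable_seqscan = off;

explain (analyze, buffers)
select * from test_prices
where time BETWEEN '2015-06-05 00:00:00+00' and '2020-06-05 00:00:00+00';

reset enable_seqscan;

If the index plan touches roughly as many buffers as the sequential scan and runs no faster, the planner's preference for the table scan is confirmed.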

CodePudding user response:

Your query selects two thirds of the table. A sequential scan is the most efficient way to process such a query.

Your query executes in under 2 seconds. It must be your client that takes a long time to render the query result (pgAdmin is infamous for that). Use a different client.
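
To confirm that the server side is fast, you can time the query in psql while avoiding the transfer and rendering of 1.6 million rows (a sketch; count(*) stands in for the full select):

\timing on

-- aggregating server-side means the client has almost nothing to fetch or render
select count(*) from test_prices
where time BETWEEN '2015-06-05 00:00:00+00' and '2020-06-05 00:00:00+00';

If this finishes in a couple of seconds, the 2:30 you observed is client-side overhead, not query execution time.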
