Query slows down 5 fold after copying DB (on the same computer!)

Time: 07-06

In a location-based app, there's a specific query that has to run fast:

SELECT count(*) FROM users
  WHERE earth_box(ll_to_earth(40.71427000, -74.00597000), 50000) @> ll_to_earth(latitude, longitude)

However, after copying the database using Postgres' tools:

pg_dump dummy_users > dummy_users.dump
createdb slow_db
psql slow_db < dummy_users.dump

the query takes 2.5 seconds on slow_db instead of 0.5 seconds!

The planner chooses a different plan in slow_db. EXPLAIN ANALYZE on slow_db:

"Aggregate  (cost=10825.18..10825.19 rows=1 width=8) (actual time=2164.396..2164.396 rows=1 loops=1)"
"  ->  Bitmap Heap Scan on users  (cost=205.45..10818.39 rows=2714 width=0) (actual time=26.188..2155.680 rows=122836 loops=1)"
"        Recheck Cond: ('(1281995.9045467733, -4697354.822067326, 4110397.4955141144),(1381995.648489849, -4597355.078124251, 4210397.23945719)'::cube @> (ll_to_earth(latitude, longitude))::cube)"
"        Rows Removed by Index Recheck: 364502"
"        Heap Blocks: exact=57514 lossy=33728"
"        ->  Bitmap Index Scan on distance_index  (cost=0.00..204.77 rows=2714 width=0) (actual time=20.068..20.068 rows=122836 loops=1)"
"              Index Cond: ((ll_to_earth(latitude, longitude))::cube <@ '(1281995.9045467733, -4697354.822067326, 4110397.4955141144),(1381995.648489849, -4597355.078124251, 4210397.23945719)'::cube)"
"Planning Time: 1.002 ms"
"Execution Time: 2164.807 ms"

EXPLAIN ANALYZE on the original db:

"Aggregate  (cost=8807.01..8807.02 rows=1 width=8) (actual time=239.524..239.525 rows=1 loops=1)"
"  ->  Index Scan using distance_index on users  (cost=0.41..8801.69 rows=2130 width=0) (actual time=0.156..233.760 rows=122836 loops=1)"
"        Index Cond: ((ll_to_earth(latitude, longitude))::cube <@ '(1281995.9045467733, -4697354.822067326, 4110397.4955141144),(1381995.648489849, -4597355.078124251, 4210397.23945719)'::cube)"
"Planning Time: 3.928 ms"
"Execution Time: 239.546 ms"

In both databases there's an index on the location, created in exactly the same way:

CREATE INDEX distance_index
   ON users USING gist (ll_to_earth(latitude, longitude))

I've tried running maintenance tools (VACUUM/ANALYZE etc.) before and after running that query, with and without the index; it doesn't help!

Both DBs run on the exact same machine (same Postgres server, distribution, and configuration). The data in both DBs is identical (one single table) and isn't changing. The Postgres version is 12.8.

psql's \l output for those databases:

                              List of databases
    Name     |  Owner   | Encoding | Collate | Ctype |   Access privileges   
-------------+----------+----------+---------+-------+-----------------------
 dummy_users | yoni     | UTF8     | en_IL   | en_IL | 
 slow_db     | yoni     | UTF8     | en_IL   | en_IL | 

What is going on?

(Thanks to Laurenz Albe) After SET enable_bitmapscan = off; and SET enable_seqscan = off; on the slow database, I ran the query again. Here is the EXPLAIN (ANALYZE, BUFFERS) output:

"Aggregate  (cost=11018.63..11018.64 rows=1 width=8) (actual time=213.544..213.545 rows=1 loops=1)"
"  Buffers: shared hit=11667 read=110537"
"  ->  Index Scan using distance_index on users  (cost=0.41..11011.86 rows=2711 width=0) (actual time=0.262..207.164 rows=122836 loops=1)"
"        Index Cond: ((ll_to_earth(latitude, longitude))::cube <@ '(1282077.0159892815, -4697331.573647572, 4110397.4955141144),(1382076.7599323571, -4597331.829704497, 4210397.23945719)'::cube)"
"        Buffers: shared hit=11667 read=110537"
"Planning Time: 0.940 ms"
"Execution Time: 213.591 ms"

CodePudding user response:

Manual VACUUM / ANALYZE after restore

After restoring a new database, there are no column statistics yet. Normally, autovacuum will kick in eventually, but since "data [...] isn't changing", autovacuum wouldn't be triggered.
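One way to confirm this (a diagnostic sketch, assuming the table is named `users` as in the question) is to ask Postgres when the table was last vacuumed or analyzed; on a freshly restored database these timestamps are typically still NULL:

```sql
-- On a fresh restore, all four columns are usually NULL,
-- meaning no statistics have been gathered yet.
SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'users';
```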

For the same reason (data isn't changing), I suggest running this once after restoring your single table:

VACUUM (ANALYZE, FREEZE) users;

You might as well run FREEZE for a table that's never changed. (FULL isn't necessary, since there are no dead tuples in a freshly restored table.)

Explanation for the plan change

With everything else being equal, I suspect at least two major problems:

  1. Bad column statistics
  2. Bad database configuration (the more severe problem)

See:

In the slow DB, Postgres expects rows=2714, while it expects rows=2130 in the fast one. The difference may not seem huge, but it may well be enough to tip Postgres over to the other query plan (which turns out to be inferior).

Seeing that Postgres actually finds rows=122836, both estimates are bad; the one in the slow DB is actually less bad. But the bitmap scan turns out to be slower than the index scan, even with many more qualifying rows than expected. (!) So your database configuration is most probably way off. The main problem is typically the default random_page_cost of 4, while a realistic setting for a fully cached, read-only table is much closer to 1 (maybe 1.1, to allow for some additional cost). There are a couple of other settings that encourage index scans, like effective_cache_size. Start here:
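A sketch of the kind of tuning meant here. The values are illustrative assumptions, not recommendations; adjust them to your hardware and working set:

```sql
-- Make random reads cheap for a mostly cached database (default is 4.0):
ALTER SYSTEM SET random_page_cost = 1.1;

-- Tell the planner how much data the OS and Postgres can keep cached;
-- a common starting point is roughly 50-75% of RAM on a dedicated server:
ALTER SYSTEM SET effective_cache_size = '4GB';

-- Both settings take effect on reload; no server restart needed:
SELECT pg_reload_conf();
```

You can also test such settings per session first (`SET random_page_cost = 1.1;`) and compare EXPLAIN ANALYZE output before changing them server-wide.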

Estimates are just that: estimates. And column statistics are also just that: statistics. They are not exact, but subject to random variation. You might increase the statistics target to improve the validity of column statistics.
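For example (a sketch; the target value 500 is an arbitrary assumption, the default is 100):

```sql
-- Collect larger samples for the columns feeding the expression index,
-- then re-gather statistics. ANALYZE also collects statistics for the
-- expression in the GiST index itself.
ALTER TABLE users ALTER COLUMN latitude  SET STATISTICS 500;
ALTER TABLE users ALTER COLUMN longitude SET STATISTICS 500;
ANALYZE users;
```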

Cheap random reads favor index scans and discourage bitmap index scans.
More qualifying rows favor a bitmap index scan; fewer favor a plain index scan. See:
