So I have this PL/pgSQL function (running PostgreSQL 14.5 on Ubuntu 18.04 with 8 GB RAM) which returns 200 rows just fine:
db=# explain (analyze,buffers) Select * from f_client_getlistasshown('{"limit":"200","startdate":"2014-01-01","enddate":"2100-01-01","showRequiresActionFromTaxadvisor":false}');
--------------------------------------------------------------------------------------------------------------------------------
Function Scan on f_client_getlistasshown (cost=0.25..10.25 rows=1000 width=400) (actual time=69.515..69.529 rows=200 loops=1)
Buffers: shared hit=8939 dirtied=1
Planning Time: 0.066 ms
Execution Time: 70.282 ms
(4 rows)
Now I repeat this query 5 times, and each time it returns the result fast. But then, on the 6th attempt:
Function Scan on f_client_getlistasshown (cost=0.25..10.25 rows=1000 width=400) (actual time=8790.305..8790.319 rows=200 loops=1)
Buffers: shared hit=2147651
Planning Time: 0.034 ms
Execution Time: 8790.351 ms
As you can see, the number of buffer hits suddenly explodes and the execution time is terrible. shared_buffers is set to 2 GB. I don't see the problem if I just execute the query that the function calls internally, so I didn't bother showing it here. What could be causing this?
CodePudding user response:
This is a known issue. The first five executions of an embedded SQL statement in PL/pgSQL use custom plans (plans explicitly optimized for the current argument values). After that, a heuristic decides whether custom plans will keep being generated or whether one generic plan will be used instead. In your case, the generic plan of some query inside your function doesn't work well.
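You can watch the same behaviour outside PL/pgSQL with a prepared statement. Here is a minimal sketch; the table clients and its created_at column are made up purely for illustration:

PREPARE p(date, date) AS
    SELECT * FROM clients WHERE created_at BETWEEN $1 AND $2;

-- the first five plannings always produce a custom plan with the literal dates inlined
EXPLAIN EXECUTE p('2014-01-01', '2100-01-01');
-- ... repeat a few more times ...

-- from the sixth planning on, the planner may switch to a generic plan;
-- you can tell because the filter then shows $1 and $2 instead of the literal dates
EXPLAIN EXECUTE p('2014-01-01', '2100-01-01');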
The simplest solution is to just force custom plans for your function, like this:
CREATE OR REPLACE FUNCTION f_client_getlistasshown(...)
RETURNS ...
AS $$
...
$$
LANGUAGE plpgsql
SET plan_cache_mode TO force_custom_plan;
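If you prefer not to edit the function body, the same setting can be attached afterwards with ALTER FUNCTION, or set for the whole session while testing. The jsonb argument type below is an assumption; use your function's actual signature:

-- attach the setting to the existing function (adjust the argument types to your signature)
ALTER FUNCTION f_client_getlistasshown(jsonb) SET plan_cache_mode = force_custom_plan;

-- or force it session-wide to confirm that the generic plan is the culprit
SET plan_cache_mode = force_custom_plan;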