Identical SELECT vs DELETE query creates different query plans with vastly different execution time

Time:06-28

I am trying to speed up a delete query that appears to be very slow when compared to an identical select query:

Slow delete query:
https://explain.depesz.com/s/kkWJ

delete from processed.token_utxo
where token_utxo.output_tx_time >= (select '2022-03-01T00:00:00+00:00'::timestamp with time zone)
  and token_utxo.output_tx_time < (select '2022-03-02T00:00:00+00:00'::timestamp with time zone)
  and not exists (
    select 1
    from public.ma_tx_out
    where ma_tx_out.id = token_utxo.id
  )

Fast select query: https://explain.depesz.com/s/Bp8q

select * from processed.token_utxo
where token_utxo.output_tx_time >= (select '2022-03-01T00:00:00+00:00'::timestamp with time zone)
  and token_utxo.output_tx_time < (select '2022-03-02T00:00:00+00:00'::timestamp with time zone)
  and not exists (
    select 1
    from public.ma_tx_out
    where ma_tx_out.id = token_utxo.id
  )

Table reference:

create table processed.token_utxo (
  id bigint,
  tx_out_id bigint,
  token_id bigint,
  output_tx_id bigint,
  output_tx_index int,
  output_tx_time timestamp,
  input_tx_id bigint,
  input_tx_time timestamp,
  address varchar,
  address_has_script boolean,
  payment_cred bytea,
  redeemer_id bigint,
  stake_address_id bigint,
  quantity numeric,
  primary key (id)
);

create index token_utxo_output_tx_id on processed.token_utxo using btree (output_tx_id);
create index token_utxo_input_tx_id on processed.token_utxo using btree (input_tx_id);
create index token_utxo_output_tx_time on processed.token_utxo using btree (output_tx_time);
create index token_utxo_input_tx_time on processed.token_utxo using btree (input_tx_time);
create index token_utxo_address on processed.token_utxo using btree (address);
create index token_utxo_token_id on processed.token_utxo using btree (token_id);

Version: PostgreSQL 13.6 on x86_64-pc-linux-gnu, compiled by Debian clang version 12.0.1, 64-bit

Postgres chooses different query plans for the two statements, which results in drastically different performance. I'm not familiar enough with Postgres to understand why it makes this decision. I'm hoping there is a simple way to guide it towards a better plan here.

CodePudding user response:

I am not exactly sure what triggers the switch of query plan between SELECT and DELETE, but I do know this: the subqueries returning a constant value are actively unhelpful. Use these instead:

SELECT *
FROM   processed.token_utxo t
WHERE  t.output_tx_time >= '2022-03-01T00:00:00+00:00'::timestamptz  -- no subquery
AND    t.output_tx_time <  '2022-03-02T00:00:00+00:00'::timestamptz  -- no subquery
AND    NOT EXISTS (SELECT FROM public.ma_tx_out m WHERE m.id = t.id);

DELETE FROM processed.token_utxo t
WHERE t.output_tx_time >= '2022-03-01T00:00:00+00:00'::timestamptz
AND   t.output_tx_time <  '2022-03-02T00:00:00+00:00'::timestamptz
AND   NOT EXISTS (SELECT FROM public.ma_tx_out m WHERE m.id = t.id);

As you can see in the query plan, Postgres comes up with a generic plan for yet unknown timestamps:

Index Cond: ((output_tx_time >= $0) AND (output_tx_time < $1))

My fixed query allows Postgres to devise a plan for the actual given constant values. If your column statistics are up to date, this allows for more optimization according to the number of rows expected to qualify for that time interval. The query plan will change to:

Index Cond: ((output_tx_time >= '2022-03-01T00:00:00+00:00'::timestamp with time zone) AND (output_tx_time < '2022-03-02T00:00:00+00:00'::timestamp with time zone))

And you will see different row estimates, which may result in a different query plan.
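To see whether the rewrite actually changes the plan, you can compare both forms with EXPLAIN. A minimal sketch (same table and date range as above; plain EXPLAIN only plans the query without running it):

```sql
-- Plan for the original form with subqueries: expect "$0"/"$1" placeholders
-- in the Index Cond, i.e. a plan built for unknown values.
EXPLAIN
SELECT *
FROM   processed.token_utxo t
WHERE  t.output_tx_time >= (SELECT '2022-03-01T00:00:00+00:00'::timestamptz)
AND    t.output_tx_time <  (SELECT '2022-03-02T00:00:00+00:00'::timestamptz)
AND    NOT EXISTS (SELECT FROM public.ma_tx_out m WHERE m.id = t.id);

-- Plan for the rewritten form with literal constants: expect the actual
-- timestamps in the Index Cond and row estimates based on column statistics.
EXPLAIN
SELECT *
FROM   processed.token_utxo t
WHERE  t.output_tx_time >= '2022-03-01T00:00:00+00:00'::timestamptz
AND    t.output_tx_time <  '2022-03-02T00:00:00+00:00'::timestamptz
AND    NOT EXISTS (SELECT FROM public.ma_tx_out m WHERE m.id = t.id);
```

If the statistics look stale, running `ANALYZE processed.token_utxo;` first gives the planner current row counts for the interval.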

CodePudding user response:

Why it comes up with different plans is relatively easy to explain. First, a DELETE cannot use parallel query, so the plan believed to be more parallel-friendly is favored for the SELECT but not for the DELETE. Maybe that restriction will be eased in some future version. Second, the DELETE cannot use an index-only scan on ma_tx_out_pkey the way the pure SELECT can; it has to use a plain index scan instead. This too makes the faster plan look less attractive for the DELETE than for the SELECT. These two factors combined are apparently enough to make it switch plans. We have already seen evidence of the first factor. You can probably verify the second by setting enable_seqscan to off and seeing what plan the DELETE chooses then; if it is the nested loop, check that the last index scan is not index-only.
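That enable_seqscan experiment can be done safely inside a transaction that is rolled back, so nothing is actually deleted. A sketch, reusing the query from the question:

```sql
BEGIN;
-- Session-local: discourage sequential scans so the planner reveals its
-- next-best alternative for the DELETE.
SET LOCAL enable_seqscan = off;

EXPLAIN (ANALYZE, BUFFERS)
DELETE FROM processed.token_utxo t
WHERE t.output_tx_time >= '2022-03-01T00:00:00+00:00'::timestamptz
AND   t.output_tx_time <  '2022-03-02T00:00:00+00:00'::timestamptz
AND   NOT EXISTS (SELECT FROM public.ma_tx_out m WHERE m.id = t.id);

-- Undo the deletions performed by EXPLAIN ANALYZE (and the SET LOCAL).
ROLLBACK;
```

Note that EXPLAIN (ANALYZE) really executes the DELETE, which is why the ROLLBACK matters; in the plan output, look at the node on ma_tx_out_pkey and see whether it says "Index Only Scan" or just "Index Scan".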

But of course the only reason those factors can tip the decision between plans is that the plan estimates were so close together in the first place, despite being so different in actual performance. So what explains that false closeness? That is harder to determine with the information we have (it would help if you had run EXPLAIN (ANALYZE, BUFFERS) with track_io_timing turned on).

One possibility is that the difference in actual performance is illusory. Maybe the nested loop is so fast only because all the data it needs is already in memory, and the only reason for that is that you executed the same query repeatedly with the same parameters as part of your testing. Is it still as fast if you change the timestamp parameters, or if you clear both the PostgreSQL buffer cache and the OS file cache between runs?

Another possibility is that your system is just poorly tuned. For example, if your data is on SSD, then the default setting of random_page_cost (4.0) is probably much too high; 1.1 might be a more reasonable setting.
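You can try a lower random_page_cost for the current session only, before committing to a change in postgresql.conf. A sketch (the 1.1 value is the common SSD suggestion, not something measured on your hardware):

```sql
-- Session-only change; other connections are unaffected.
SET random_page_cost = 1.1;

-- Plain EXPLAIN (no ANALYZE) so the DELETE is planned but not executed.
EXPLAIN
DELETE FROM processed.token_utxo t
WHERE t.output_tx_time >= '2022-03-01T00:00:00+00:00'::timestamptz
AND   t.output_tx_time <  '2022-03-02T00:00:00+00:00'::timestamptz
AND   NOT EXISTS (SELECT FROM public.ma_tx_out m WHERE m.id = t.id);

-- Back to the configured default.
RESET random_page_cost;
```

If the cheaper random-I/O assumption flips the DELETE to the nested-loop plan, that is a strong hint the global setting is worth revisiting.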

Finally, your setting of work_mem is probably way too low. That results in the hash using an extravagant number of batches: 8192. How much this affects performance is hard to predict, as it depends on your hardware, your kernel, your filesystem, etc. (which is maybe why the planner does not try to take it into account). It is pretty easy to test: increase the setting of work_mem locally (in your session) and see if it changes the speed.
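A session-local work_mem experiment might look like this ('256MB' is an illustrative value; size it to your RAM and expected concurrency):

```sql
-- Session-only: give the hash enough memory to avoid splitting into
-- thousands of batches.
SET work_mem = '256MB';

EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM   processed.token_utxo t
WHERE  t.output_tx_time >= '2022-03-01T00:00:00+00:00'::timestamptz
AND    t.output_tx_time <  '2022-03-02T00:00:00+00:00'::timestamptz
AND    NOT EXISTS (SELECT FROM public.ma_tx_out m WHERE m.id = t.id);

RESET work_mem;
```

In the plan output, compare the "Batches:" figure on the Hash node against the 8192 you saw before; a single batch means the hash table fit in memory.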

Much of this analysis is only possible because your delete doesn't actually find any rows to delete. If it were deleting rows, the situation would be far more complex.
