I am trying to join one table to another using two IDs (patient_id, encounter_id) that are shared between them. Both tables are indexed on these IDs.
Here is the LHS:
tnx_prophy=# \d diagnosis
Table "public.diagnosis"
Column | Type | Collation | Nullable | Default
-------------------------------+------+-----------+----------+---------
patient_id | text | | |
encounter_id | text | | |
code_system | text | | |
code | text | | |
principal_diagnosis_indicator | text | | |
date | text | | |
Indexes:
"idx_pt_enc_dx" btree (patient_id, encounter_id)
Here is the RHS:
tnx_prophy=# \d encounter
Table "public.encounter"
Column | Type | Collation | Nullable | Default
--------------+------+-----------+----------+---------
encounter_id | text | | |
patient_id | text | | |
type | text | | |
enc_type | text | | |
Indexes:
"idx_pt_enc_enc" btree (patient_id, encounter_id)
The datasets are large (~500m rows?), but my UPDATE with a join is taking much longer than I would like. And yes, I do want to update the table in place (not just generate a temporary table).
tnx_prophy=# ALTER TABLE diagnosis ADD COLUMN enc_type text;
ALTER TABLE
tnx_prophy=# UPDATE diagnosis
tnx_prophy-# SET enc_type = encounter.enc_type
tnx_prophy-# FROM encounter
tnx_prophy-# WHERE (diagnosis.patient_id, diagnosis.encounter_id) = (encounter.patient_id, encounter.encounter_id);
Any advice on how to do this faster? Or am I messing up the syntax in some obvious way? Thanks a ton if anyone can help!
CodePudding user response:
\i tmp.sql
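-- Note: judging by the first three lines of the result below (DROP SCHEMA / CREATE SCHEMA / SET),
-- tmp.sql also begins with a schema-reset preamble that is not shown here.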
CREATE TABLE diagnosis
( patient_id text
, encounter_id text
-- , code_system text
-- , code text
, principal_diagnosis_indicator text
-- , date text
);
CREATE INDEX idx_pt_enc_dx ON diagnosis (patient_id, encounter_id);
CREATE TABLE encounter
( encounter_id text
, patient_id text
, type text
, enc_type text
);
CREATE INDEX idx_pt_enc_enc ON encounter (patient_id, encounter_id);
INSERT INTO diagnosis(patient_id, encounter_id, principal_diagnosis_indicator) VALUES
(1,1, 'influenza')
,(1,1, 'cancer')
,(2,1, 'influenza')
,(2,1, 'cancer')
;
INSERT INTO encounter(patient_id, encounter_id, enc_type) VALUES
( 1,1, 'OMG')
,( 1,1, 'WTF')
,( 2,1, 'WTF')
,( 2,1, 'OMG')
;
ALTER TABLE diagnosis ADD COLUMN enc_type text;
EXPLAIN ANALYZE
UPDATE diagnosis dst
SET enc_type = src.enc_type
FROM encounter src
WHERE (dst.patient_id, dst.encounter_id) = (src.patient_id, src.encounter_id)
AND dst.enc_type IS DISTINCT FROM src.enc_type -- both columns are NULLABLE
;
SELECT * FROM diagnosis;
Result:
DROP SCHEMA
CREATE SCHEMA
SET
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
INSERT 0 4
INSERT 0 4
ALTER TABLE
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------
Update on diagnosis dst (cost=0.30..47.59 rows=7 width=140) (actual time=0.383..0.385 rows=0 loops=1)
-> Merge Join (cost=0.30..47.59 rows=7 width=140) (actual time=0.139..0.232 rows=8 loops=1)
Merge Cond: ((dst.patient_id = src.patient_id) AND (dst.encounter_id = src.encounter_id))
Join Filter: (dst.enc_type IS DISTINCT FROM src.enc_type)
-> Index Scan using idx_pt_enc_dx on diagnosis dst (cost=0.15..21.15 rows=520 width=134) (actual time=0.066..0.082 rows=4 loops=1)
-> Index Scan using idx_pt_enc_enc on encounter src (cost=0.15..21.15 rows=520 width=102) (actual time=0.051..0.086 rows=7 loops=1)
Planning Time: 1.278 ms
Execution Time: 0.858 ms
(8 rows)
patient_id | encounter_id | principal_diagnosis_indicator | enc_type
------------+--------------+-------------------------------+----------
1 | 1 | cancer | WTF
1 | 1 | influenza | WTF
2 | 1 | cancer | OMG
2 | 1 | influenza | OMG
(4 rows)
Take a good look at the
Merge Join (cost=0.30..47.59 rows=7 width=140) (actual time=0.139..0.232 rows=8 loops=1)
line: the join produces 8 rows, but there are only 4 records in diagnosis!! This happens because the search key into your lookup table is not unique: every diagnosis record is matched twice, so which of the two enc_type values it ends up with is undefined!
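A quick way to confirm whether your real lookup key is non-unique (a sketch against the tables from the question; on ~500m rows this scan will take a while):

SELECT patient_id, encounter_id, count(*)
FROM encounter
GROUP BY patient_id, encounter_id
HAVING count(*) > 1
LIMIT 10;

If the duplicates are legitimate and cannot be removed from encounter, one option (a sketch, not necessarily the fastest at this scale) is to collapse the lookup side to one row per key before updating, e.g. with DISTINCT ON; which duplicate wins is still arbitrary unless you add a meaningful tiebreaker to the ORDER BY:

UPDATE diagnosis dst
SET enc_type = src.enc_type
FROM (
    SELECT DISTINCT ON (patient_id, encounter_id)
           patient_id, encounter_id, enc_type
    FROM encounter
    ORDER BY patient_id, encounter_id   -- add a tiebreak column here to make the choice deterministic
) src
WHERE dst.patient_id = src.patient_id
  AND dst.encounter_id = src.encounter_id
  AND dst.enc_type IS DISTINCT FROM src.enc_type;

The cleaner fix, though, is to decide whether (patient_id, encounter_id) is supposed to be unique in encounter and, if so, deduplicate the table and back the key with a unique index.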