I have two tables with the following structures:
Table_1: id, name, age
Table_2: id, table_1_id, city, state
I need to perform a bulk insert into Table_1:
INSERT INTO Table_1(name, age)
VALUES ('A', 12), ('B', 13), ('C', 14)
RETURNING id
The ids returned by the first insert are then to be used in the following query:
INSERT INTO Table_2(table_1_id, city, state)
VALUES (first_row_id, 'Austin', 'Texas'),
(second_row_id, 'Dallas', 'Texas'),
(third_row_id, 'Houston', 'Texas')
How can I achieve this using a CTE, or some other efficient way with the least amount of code, in Postgres?
CodePudding user response:
I have to admit I've never considered a construct like this, but for what it's worth, I believe this will do what you seek in a single statement, unless I've missed something:
with ids as (
  INSERT INTO Table_1(name, age)
  VALUES ('A', 12), ('B', 13), ('C', 14)
  RETURNING id
),
rowz as (
  select id, row_number() over (order by id) as rn
  from ids
)
insert into table_2 (table_1_id, city, state)
select
  r.id, v.city, v.state
from
  rowz r
  join (values (1, 'Austin', 'Texas'), (2, 'Dallas', 'Texas'), (3, 'Houston', 'Texas')) v (id, city, state) on
    r.rn = v.id
Note that the join pairs rows by position, via row_number() over the returned ids. That relies on the generated ids coming back in the same order as the VALUES list, which holds in practice for a single multi-row insert into a serial/identity column, though it isn't something Postgres formally guarantees.
Just my $0.02, but I think it would be easier to follow and more scalable if you instead wrapped this in some code and did it that way (pick your favorite programming language, or use PL/pgSQL).
The flaw I see with this is that, ideally, you want to provide just this:
A 12 Austin Texas
B 13 Dallas Texas
and have the script do the rest. Here you have to mess with the individual values, and unless it's always exactly three rows you can't really take advantage of parameters (at least not easily).
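To illustrate the "wrap it in code" idea, here is a rough Python sketch that builds the two parameterized statements from a single list of rows, so the row count no longer matters. The function names are illustrative, not from the original post; executing the statements would be done through your driver of choice (e.g. psycopg2), with both inserts in one transaction.

```python
def build_table_1_insert(rows):
    """Build a parameterized bulk insert for Table_1 from (name, age, city, state) rows."""
    placeholders = ", ".join(["(%s, %s)"] * len(rows))
    sql = f"INSERT INTO Table_1(name, age) VALUES {placeholders} RETURNING id"
    params = [v for (name, age, _city, _state) in rows for v in (name, age)]
    return sql, params

def build_table_2_insert(ids, rows):
    """Pair the returned ids positionally with city/state for Table_2."""
    placeholders = ", ".join(["(%s, %s, %s)"] * len(rows))
    sql = f"INSERT INTO Table_2(table_1_id, city, state) VALUES {placeholders}"
    params = [v for tid, (_name, _age, city, state) in zip(ids, rows)
              for v in (tid, city, state)]
    return sql, params

rows = [
    ("A", 12, "Austin", "Texas"),
    ("B", 13, "Dallas", "Texas"),
    ("C", 14, "Houston", "Texas"),
]
sql_1, params_1 = build_table_1_insert(rows)
# execute sql_1 with params_1, collect ids from RETURNING, then:
# sql_2, params_2 = build_table_2_insert(ids, rows)
```

With this shape you never touch the literal values in the SQL text; everything travels as parameters, and the same two statements work for any number of rows.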