I'm working with some UPDATE and JOIN statements from previous code, and I'm trying to use Spark SQL statements to get the same result.
Table 1:
INSERT INTO tab1
VALUES (1, 'A1', 'B1', 1),
       (2, 'A2', 'B2', 0),
       (3, 'A3', 'B3', 1),
       (4, 'A4', 'B4', 1),
       (5, 'A5', 'B5', 0),
       (6, 'A6', 'B6', 1);
Table 2:
INSERT INTO tab2
VALUES (1, 'A1', 'B1', 0),
       (2, 'A2', 'B2', 1),
       (3, 'A3', 'B3', 1),
       (6, 'A6', 'B6', 0);
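For reference, a minimal schema that these inserts assume; the column names id, v1, v2, v3 are inferred from the UPDATE statement below, so treat this as a sketch:

CREATE TABLE tab1 (id INT, v1 VARCHAR(10), v2 VARCHAR(10), v3 INT);
CREATE TABLE tab2 (id INT, v1 VARCHAR(10), v2 VARCHAR(10), v3 INT);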
Update statement:
UPDATE tab1
SET v1 = concat(t1.v1, t2.v1)
FROM tab1 t1
INNER JOIN tab2 t2 ON t1.id = t2.id
WHERE t2.v3 > 0;
Result table 1:
id  v1    v2  v3
1   A2A2  B1  1
2   A2A2  B2  0
3   A2A2  B3  1
4   A2A2  B4  1
5   A2A2  B5  0
6   A2A2  B6  1
Any idea why it's not:
id  v1    v2  v3
1   A1    B1  1
2   A2A2  B2  0
3   A3A3  B3  1
4   A4    B4  1
5   A5    B5  0
6   A6    B6  1
CodePudding user response:
Get rid of tab1 in the FROM clause and place tab2 there instead; you can do the join in the WHERE clause. In PostgreSQL, naming the target table again in the FROM clause creates a self-join: every tab1 row is paired with every qualifying (t1, t2) pair and updated from an arbitrary one of them, which is why every row ended up with A2A2. The corrected statement:
UPDATE tab1 t1
SET v1 = concat(t1.v1, t2.v1)
FROM tab2 t2
WHERE t1.id = t2.id AND t2.v3 > 0;
Demo: db<>fiddle
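If you happen to be on PostgreSQL 15 or newer, MERGE is another option, and it maps more directly onto what Spark/Delta Lake offers; a minimal sketch, assuming PG 15+:

MERGE INTO tab1 t1
USING tab2 t2
ON t1.id = t2.id
WHEN MATCHED AND t2.v3 > 0 THEN
    UPDATE SET v1 = concat(t1.v1, t2.v1);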
CodePudding user response:
Turns out the previous code was running on MS SQL Server. With the same syntax, it gives the expected results when tried on an MS SQL Server:
id  v1    v2  v3
1   A1    B1  1
2   A2A2  B2  0
3   A3A3  B3  1
4   A4    B4  1
5   A5    B5  0
6   A6    B6  1
In PostgreSQL, as mentioned in a comment on this question, the from_item must not contain the table being updated!
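Coming back to the Spark SQL goal from the question: plain Spark SQL tables do not support UPDATE at all, so neither dialect's syntax carries over as-is. Two sketches, each under assumptions called out in the comments (MERGE INTO requires the target to be a Delta table; tab1_updated is a hypothetical name):

-- Sketch 1: assuming tab1 is a Delta table, which supports MERGE INTO
MERGE INTO tab1 t1
USING tab2 t2
ON t1.id = t2.id AND t2.v3 > 0
WHEN MATCHED THEN
    UPDATE SET v1 = concat(t1.v1, t2.v1);

-- Sketch 2: for a plain (non-Delta) table, rebuild the rows with a
-- LEFT JOIN instead of updating in place (tab1_updated is hypothetical)
CREATE TABLE tab1_updated AS
SELECT t1.id,
       CASE WHEN t2.id IS NOT NULL
            THEN concat(t1.v1, t2.v1)
            ELSE t1.v1
       END AS v1,
       t1.v2,
       t1.v3
FROM tab1 t1
LEFT JOIN tab2 t2
  ON t1.id = t2.id AND t2.v3 > 0;

Both sketches assume id is unique in tab2; with duplicate matches the MERGE would fail and the LEFT JOIN would multiply rows.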