I'm trying to join two datasets in AWS Glue.
Table 1(alias af):
| id | data     | created    |
|----|----------|------------|
| 1  | string 1 | 2020-02-10 |
| 2  | string 2 | 2020-02-11 |
| 3  | string 3 | 2020-02-12 |
Table 2 (alias mp):
| id | data     | data2       | created    | foreign_key |
|----|----------|-------------|------------|-------------|
| 1  | string 1 | json string | 2020-02-10 | 2           |
| 2  | string 2 | json string | 2020-02-11 | 3           |
| 3  | string 3 | json string | 2020-02-12 | 3           |
What I want to do is get all rows from table 1 and, for each one, select the first row from table 2 that matches the foreign key.
This is what I have currently. After going through a few similar questions, I found that I need to wrap the subquery in an aggregate function so Spark knows only one element will be returned by it.
select af.id,af.data
(select first(mp.data)
from mp
where af.id= mp.foreign_key
) as alias1,
(select first(mp.data2)
from mp
where af.id= mp.foreign_key
) as alias2
from af
having alias 1 is not null and alias2 is not null
But this is giving me the following error:
ParseException: mismatched input 'first' expecting {')', ',', '-'}(line 3, pos 15)
Any help will be appreciated!
CodePudding user response:
I've found a solution that works for my use case. The comment above was right: the SQL was funky before.
select af.*, mp.*
from af join
     (select mp.*,
             row_number() over (partition by mp.foreign_key order by mp.created) as seqnum
      from mp
     ) mp
     on af.id = mp.foreign_key and mp.seqnum = 1;
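To make the semantics of the window-function version concrete, here is a minimal plain-Python sketch (no Spark required) of what `row_number() over (partition by foreign_key order by created)` followed by the `seqnum = 1` join does with the sample rows above. The dict/list layout is just an illustration, not Glue's actual data representation:

```python
# Sample rows from the two tables above.
af = [
    {"id": 1, "data": "string 1", "created": "2020-02-10"},
    {"id": 2, "data": "string 2", "created": "2020-02-11"},
    {"id": 3, "data": "string 3", "created": "2020-02-12"},
]
mp = [
    {"id": 1, "data": "string 1", "data2": "json string", "created": "2020-02-10", "foreign_key": 2},
    {"id": 2, "data": "string 2", "data2": "json string", "created": "2020-02-11", "foreign_key": 3},
    {"id": 3, "data": "string 3", "data2": "json string", "created": "2020-02-12", "foreign_key": 3},
]

# Equivalent of row_number() ... seqnum = 1: keep only the first row per
# foreign_key when rows are ordered by created.
first_per_key = {}
for row in sorted(mp, key=lambda r: r["created"]):
    first_per_key.setdefault(row["foreign_key"], row)

# Equivalent of the inner join on af.id = mp.foreign_key: af rows with no
# matching foreign_key are dropped, matching rows get mp's data/data2.
result = [
    {"af_id": a["id"],
     "alias1": first_per_key[a["id"]]["data"],
     "alias2": first_per_key[a["id"]]["data2"]}
    for a in af
    if a["id"] in first_per_key
]
print(result)
```

With the sample data, `af.id = 1` has no matching `foreign_key` and is dropped, while ids 2 and 3 each pick up the earliest matching `mp` row. Note this is an inner join; if you need every `af` row kept regardless of a match, use `left join` in the SQL instead.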