Problem description
My intention is to execute the following action. Given a table with new data covering one or more dates, and a target table with historical data like the one below, I would like to replace that range of dates (in this example it's only one day, 17/10/2022) in the historical data, so that the result looks like the image below:
First attempt
To reach this result, my first implementation used the Databricks SQL MERGE INTO statement below:
MERGE INTO historical_data
USING new_data
ON historical_data.Date = new_data.Date
WHEN MATCHED THEN
  UPDATE SET *
WHEN NOT MATCHED THEN
  INSERT *
But this code raises the following error:
UnsupportedOperationException: Cannot perform Merge as multiple source rows matched and attempted to modify the
same target row in the Delta table in possibly conflicting ways.
The error is self-explanatory, and the only way I found to solve the problem was to implement a two-step SQL script.
My current solution
First, delete the range in the target table and then insert the new content.
Here you can see the implementation:
--DELETE THE RANGE FROM THE TARGET TABLE
DELETE FROM historical_data
WHERE
historical_data.Date >= (SELECT MIN(new_data.Date) FROM new_data) AND
historical_data.Date <= (SELECT MAX(new_data.Date) FROM new_data);
-- INSERT THE NEW DATA INTO THE HISTORICAL TABLE
INSERT INTO TABLE historical_data
SELECT * FROM new_data;
Drawbacks and the main question
The drawback of this implementation is that it is not atomic; the action happens in two steps. This is not the behavior I want: if the script fails between the DELETE and the INSERT, the table can be left with the old rows already deleted and the new rows not yet inserted. I am looking for a way to avoid this. Does anyone know how to help me?
CodePudding user response:
There's no way around this limitation of MERGE.
If your data volume is small, you can just overwrite the entire table; on a Delta table, INSERT OVERWRITE runs as a single atomic commit, so there is no intermediate state with deleted-but-not-yet-reinserted rows.
INSERT OVERWRITE historical_data
SELECT Date, DocNum, Value
FROM historical_data
WHERE Date NOT IN (SELECT Date FROM new_data)
UNION ALL
SELECT Date, DocNum, Value
FROM new_data;
Or, if the volume is huge but the number of distinct dates is not extreme, partition the table by date. This allows you to overwrite only the affected partitions in dynamic partition overwrite mode, and makes your query as simple as:
INSERT OVERWRITE historical_data
SELECT Date, DocNum, Value
FROM new_data;
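For the partitioned variant to work, the target table must be partitioned by Date and the session must be in dynamic overwrite mode (otherwise INSERT OVERWRITE replaces the whole table). Here is a minimal setup sketch reusing the table and column names from the question; the column types are assumptions, and the config key is the standard Spark setting, which requires a Delta Lake / Databricks Runtime version that supports dynamic partition overwrite:

```sql
-- One-time: (re)create the historical table partitioned by Date.
-- Column types are assumed; adjust them to the real schema.
CREATE TABLE historical_data (
  Date   DATE,
  DocNum STRING,
  Value  DOUBLE
)
USING DELTA
PARTITIONED BY (Date);

-- Per-session: overwrite only the partitions present in the source,
-- instead of the default static mode, which overwrites everything.
SET spark.sql.sources.partitionOverwriteMode = dynamic;
```

With this in place, the INSERT OVERWRITE above replaces only the Date partitions that appear in new_data and leaves the rest of the history untouched.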