I have a table where the segment field was incorrectly filled with 'None'. How can I, without using UPDATE, insert data from the same table with correctly filled segments using joins?
Rows can be matched on the unique fields phone_number, name and patronym. The matching row should also have the largest date, so that the chance of filling the segment correctly is higher.
Example of table:
| summ  | phone      | name  | patronym  | DATE       | STATUS     | SEGMENT |
|-------|------------|-------|-----------|------------|------------|---------|
| 12654 | 8706124512 | Steve | Alikhanov | 20.07.2022 | DONE       | None    |
| 12654 | 8706124512 | Steve | Alikhanov | 10.06.2022 | IN PROCESS | 1-2 NB  |
| 52165 | 8706124512 | Steve | Alikhanov | 09.01.2021 | NOT DONE   | 2-3 NB  |
My junior-level idea is to create two identical tables: the first holds all rows with the incorrectly filled field, and the second holds all unique rows with a correctly filled segment field. Then delete all rows with an incorrect segment from the main table and, by joining the two test tables, insert the corrected version of the data back.
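The plan above can be sketched roughly like this (a hedged sketch only: the table name `mytable`, the column names and the date column `date1` are assumptions, not taken from a real schema):

```sql
-- 1) stage the bad and the good rows in two helper tables
create table test_table_1 as
  select * from mytable where segment = 'None';

create table test_table_2 as
  select * from mytable where segment <> 'None';

-- 2) remove the rows with the broken segment from the main table
delete from mytable where segment = 'None';

-- 3) re-insert them, taking status/segment from the matching good rows
insert into mytable (summ, phone, name, patronym, date1, status, segment)
select t1.summ, t1.phone, t1.name, t1.patronym, t1.date1, t2.status, t2.segment
from test_table_1 t1
join test_table_2 t2
  on  t2.phone    = t1.phone
  and t2.name     = t1.name
  and t2.patronym = t1.patronym;
```

Note that step 3 can still produce several corrected rows per bad row if a client has more than one correctly filled segment; restricting test_table_2 to the latest row per client avoids that.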
CodePudding user response:
You can try the query below (I assume the key of the segment is the "summ" field):
-- Record to Insert
insert into mytable
with mytable as (
select 12654 as summ, 8706124512 as phone, 'Steve' as name, 'Alikhanov' as patronym, to_date('20.07.2022', 'DD.MM.YYYY') as date1, 'DONE' as status, 'None' as segment from dual union all
select 12654, 8706124512, 'Steve', 'Alikhanov', to_date('10.06.2022', 'DD.MM.YYYY'), 'IN PROCESS', '1-2 NB' from dual union all
select 52165, 8706124512, 'Steve', 'Alikhanov', to_date('09.01.2021', 'DD.MM.YYYY'), 'NOT DONE', '2-3 NB' from dual
),
test_table_1 as (
-- the first one holds all rows with an incorrectly filled segment
select summ, phone, name, patronym, date1, status, segment
from mytable
where segment = 'None'
),
test_table_2 as (
-- the second holds one row per client, taking status and segment
-- from the row with the largest date1
select summ, phone, name, patronym,
       max(date1) as date1,
       max(status) keep (dense_rank last order by date1) as status,
       max(segment) keep (dense_rank last order by date1) as segment
from mytable
where segment <> 'None'
group by summ, phone, name, patronym
)
select t1.summ, t1.phone, t1.name, t1.patronym, t1.date1, t2.status, t2.segment
from test_table_1 t1
join test_table_2 t2
  on  t2.summ     = t1.summ
  and t2.phone    = t1.phone
  and t2.name     = t1.name
  and t2.patronym = t1.patronym;
SUMM PHONE NAME PATRONYM DATE1 STATUS SEGMEN
---------- ---------- ----- --------- ---------- ---------- ------
12654 8706124512 Steve Alikhanov 20.07.2022 IN PROCESS 1-2 NB
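As a side note, the "latest correct row per client" selection can also be written with an analytic function instead of GROUP BY; a minimal sketch, assuming the real table is called `mytable` and the date column is `date1`:

```sql
-- rn = 1 marks, per client, the newest row whose segment is filled.
select phone, name, patronym, date1, status, segment
from (
  select t.*,
         row_number() over (
           partition by phone, name, patronym
           order by date1 desc
         ) as rn
  from mytable t
  where segment <> 'None'
)
where rn = 1;
```

Joining this result back to the rows with segment = 'None' gives the same corrected records as the two CTEs above.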
-- Record to delete
delete from mytable
where segment = 'None';
I hope I have understood your problem. If not, please describe more precisely the output rows that you need to insert.
Thank you
CodePudding user response:
In reply to the previous comment:
delete from mytable d
where d.summ in (
  -- summ values that occur in two or more duplicate rows (summ is the primary key)
  select b.summ
  from mytable b
  group by b.summ
  having count(*) >= 2
)
and d.rowid not in (
  -- rowids of the rows to keep (one row per summ: the one with the maximum date1)
  select x.codrowid
  from (
    -- the rowid of the row holding the maximum date1
    -- (keep ... dense_rank last ties the rowid to the newest date1)
    select min(a.rowid) keep (dense_rank last order by a.date1) as codrowid,
           a.summ,
           max(a.date1) as date1
    from mytable a
    group by a.summ
    having count(*) >= 2
  ) x
);
If several duplicate rows share the same date1, the query keeps the first one it finds (the smallest rowid).
Thank you.
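For reference, the same clean-up can also be written as a single analytic pass; a sketch under the same assumptions (Oracle, duplicates keyed on `summ`, date column `date1`):

```sql
-- Delete every row that is not the newest (by date1) for its summ value;
-- rowid in the order by breaks ties deterministically.
delete from mytable
where rowid in (
  select rid
  from (
    select rowid as rid,
           row_number() over (
             partition by summ
             order by date1 desc, rowid
           ) as rn
    from mytable
  )
  where rn > 1
);
```

Rows with a unique summ get rn = 1 and are left untouched, so the separate duplicate-detection subquery is not needed here.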