I have an Array of Hashes containing duplicated entries, and I want to delete the duplicates:
[ {
"code" : "32F",
"lon" : 0.963335,
"fint" : "2022-05-03T13:00:00",
"prec" : 0.0,
},{
"code" : "32F",
"lon" : 0.963335,
"fint" : "2022-05-03T13:00:00",
"prec" : 0.0,
},{
"code" : "90X",
"lon" : 0.963335,
"fint" : "2022-05-03T13:00:00",
"prec" : 0.0,
}]
This is the desired output:
[{
"code" : "32F",
"lon" : 0.963335,
"fint" : "2022-05-03T13:00:00",
"prec" : 0.0,
},{
"code" : "90X",
"lon" : 0.963335,
"fint" : "2022-05-03T13:00:00",
"prec" : 0.0,
}]
Any ideas?
Thanks!
CodePudding user response:
First, that syntax is not valid Ruby:
"code" : "32F",
A space before the colon is a syntax error in a hash literal. The right variant is "code": "32F", which creates the symbol key :code. You don't even need the quotes here: code: "32F", produces the same key. (If you want a string key, use the hash-rocket form: "code" => "32F",.)
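For reference, a minimal sketch of the three hash-key syntaxes mentioned above (standard Ruby, nothing project-specific assumed):

```ruby
# Three ways to write a hash entry in Ruby:
h1 = { "code" => "32F" }  # hash-rocket: the key is the String "code"
h2 = { "code": "32F" }    # quoted shorthand: the key is the Symbol :code
h3 = { code: "32F" }      # idiomatic shorthand: also the Symbol :code

h2 == h3  # => true  (both use the symbol key :code)
h1 == h2  # => false (string key vs. symbol key)
```

Note that the quoted shorthand and the plain shorthand produce the same symbol key, so h2 and h3 are equal, while h1 uses a string key and is not.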
To delete duplicates from an array, use uniq!.
Be careful: uniq! mutates the array in place and returns nil when no duplicates were removed:
ary = [1, 1]
ary.uniq! # => [1]
ary # => [1]
ary = [1, 2]
ary.uniq! # => nil
ary # => [1, 2]
Or use uniq (without the bang) to return a new array and leave the original untouched:
ary = [1, 1]
ary.uniq # => [1]
ary # => [1, 1]
ary = [1, 2]
ary.uniq # => [1, 2]
ary # => [1, 2]
In your case
ary =
[{
code: "32F",
lon: 0.963335,
fint: "2022-05-03T13:00:00",
prec: 0.0,
},{
code: "32F",
lon: 0.963335,
fint: "2022-05-03T13:00:00",
prec: 0.0,
},{
code: "90X",
lon: 0.963335,
fint: "2022-05-03T13:00:00",
prec: 0.0,
}]
ary.uniq!
# => [{:code=>"32F", :lon=>0.963335, :fint=>"2022-05-03T13:00:00", :prec=>0.0}, {:code=>"90X", :lon=>0.963335, :fint=>"2022-05-03T13:00:00", :prec=>0.0}]
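If you ever need to deduplicate by a single field rather than by the whole hash (say, keep only one entry per code value), uniq also accepts a block; the block's return value is what gets compared. A small sketch with hypothetical data:

```ruby
ary = [
  { code: "32F", lon: 0.963335 },
  { code: "32F", lon: 0.999999 },  # same code, different lon
  { code: "90X", lon: 0.963335 }
]

# Keep only the first hash seen for each :code value
ary.uniq { |h| h[:code] }
# => [{:code=>"32F", :lon=>0.963335}, {:code=>"90X", :lon=>0.963335}]
```

The first occurrence wins, so the 32F entry with lon 0.999999 is dropped even though it differs elsewhere.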
CodePudding user response:
Try uniq. It should work natively for your use case: it's available by default on a Ruby Array, and also in the Enumerable module for other collections.
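Since the question mentions JSON, here is a small sketch of the full round trip, assuming the data arrives as a JSON string: parse it with the standard JSON library, then call uniq on the resulting array of hashes.

```ruby
require "json"

json = <<~JSON
  [
    { "code": "32F", "lon": 0.963335 },
    { "code": "32F", "lon": 0.963335 },
    { "code": "90X", "lon": 0.963335 }
  ]
JSON

JSON.parse(json).uniq
# => [{"code"=>"32F", "lon"=>0.963335}, {"code"=>"90X", "lon"=>0.963335}]
```

JSON.parse produces string-keyed hashes, and uniq compares whole hashes, so the exact duplicate is removed.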
CodePudding user response:
Use .uniq
This should work for you.
NB: Your array of hashes has an extra space before each colon; you should remove it.