Delete duplications in JSON file

Time:12-31

I am trying to re-edit a JSON file to print only the subgroups that have any attribute marked as "change": false.

The JSON:

{"group": {
    "subgroup1": {
        "attributes": [
            {"change": false, "name": "Name"},
            {"change": false, "name": "SecondName"}
        ],
        "id": 1,
        "name": "MasterTest"
    },
    "subgroup2": {
        "attributes": [
            {"change": true, "name": "Name"},
            {"change": false, "name": "Newname"}
        ],
        "id": 2,
        "name": "MasterSet"
    }
}}
    

I was trying to use this command:

cat test.json | jq '.group[] | select(.attributes[].change==false)'

which produces the needed output, but with duplicates. Can anyone help here? Or should I use a different command to achieve that result?

CodePudding user response:

.attributes[] iterates over the attributes, and each iteration step produces its own result, so a subgroup with two matching attributes is emitted twice. Use the any filter, which aggregates multiple values into one, in this case a boolean with the meaning of "at least one":

.group[] | select(any(.attributes[]; .change==false))
{
  "attributes": [
    {
      "change": false,
      "name": "Name"
    },
    {
      "change": false,
      "name": "SecondName"
    }
  ],
  "id": 1,
  "name": "MasterTest"
}
{
  "attributes": [
    {
      "change": true,
      "name": "Name"
    },
    {
      "change": false,
      "name": "Newname"
    }
  ],
  "id": 2,
  "name": "MasterSet"
}
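For comparison, here is a Python sketch of the same selection (the data is the question's JSON with the trailing commas removed). Like jq's any(...), Python's any() collapses the per-attribute booleans into a single value, so each subgroup is emitted at most once:

```python
import json

# The question's JSON, made syntactically valid (trailing commas removed).
doc = json.loads("""
{"group": {
    "subgroup1": {"attributes": [{"change": false, "name": "Name"},
                                 {"change": false, "name": "SecondName"}],
                  "id": 1, "name": "MasterTest"},
    "subgroup2": {"attributes": [{"change": true, "name": "Name"},
                                 {"change": false, "name": "Newname"}],
                  "id": 2, "name": "MasterSet"}
}}
""")

# Equivalent of: .group[] | select(any(.attributes[]; .change == false))
# any() reduces all per-attribute tests to one boolean per subgroup.
selected = [sub for sub in doc["group"].values()
            if any(attr["change"] is False for attr in sub["attributes"])]

print([sub["name"] for sub in selected])  # ['MasterTest', 'MasterSet']
```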


CodePudding user response:

Looks to me like the duplicate is NOT a duplicate, but a condition arising from a nested sub-grouping, which gives the appearance of a duplicate. Check whether there is a switch to skip processing sub-groups when the upper level already meets the condition, thereby avoiding the perceived duplication.
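In fact the duplication can be reproduced outside jq. Inside select, the expression .attributes[].change==false yields one boolean per attribute, and select emits the subgroup once per true result. A Python sketch of that per-attribute emission (using the question's data, trailing commas removed) shows subgroup1 appearing twice:

```python
import json

# The question's JSON, made syntactically valid (trailing commas removed).
doc = json.loads("""
{"group": {
    "subgroup1": {"attributes": [{"change": false, "name": "Name"},
                                 {"change": false, "name": "SecondName"}],
                  "id": 1, "name": "MasterTest"},
    "subgroup2": {"attributes": [{"change": true, "name": "Name"},
                                 {"change": false, "name": "Newname"}],
                  "id": 2, "name": "MasterSet"}
}}
""")

# Mimic jq's select(.attributes[].change == false): the subgroup is
# emitted once for every attribute whose test comes out true.
emitted = [sub["name"]
           for sub in doc["group"].values()
           for attr in sub["attributes"]
           if attr["change"] is False]

print(emitted)  # ['MasterTest', 'MasterTest', 'MasterSet']
```

Aggregating with any (as in the first answer) is what removes the repetition.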
