I'm trying to merge / reduce many JSON objects and somehow I'm not getting the expected result.
I'm only interested in getting all keys; the values and the number of items inside arrays are irrelevant.
file1.json:
{
"customerId": "xx",
"emails": [
{
"address": "[email protected]",
"customType": "",
"type": "custom"
},
{
"address": "[email protected]",
"primary": true
},
{
"address": "[email protected]"
}
]
}
{
"id": "654",
"emails": [
{
"address": "[email protected]",
"primary": true
}
]
}
The desired output is a JSON object with all possible keys from all input objects. The values are irrelevant; any value from any input object is fine. But all keys from the input objects must be present in the output object:
{
"emails": [
{
"address": "[email protected]", <--- any existing value works
"customType": "", <--- any existing value works
"type": "custom", <--- any existing value works
"primary": true <--- any existing value works
}
],
"customerId": "xx", <--- any existing value works
"id": "654" <--- any existing value works
}
I tried reducing it, but it misses many of the keys in the array:
$ jq -s 'reduce .[] as $item ({}; . * $item)' file1.json
{
"customerId": "xx",
"emails": [
{
"address": "[email protected]",
"primary": true
}
],
"id": "654"
}
The structure of the objects contained in file1.json
is unknown, so the solution must be agnostic of any keys/values and the solution must not assume any structure or depth.
Is it possible to fix this somehow considering how jq
works? Or is it possible to solve this issue using another tool?
PS: For those of you that are curious, this is useful to infer a schema that can be created in a database. Given an arbitrary number of JSON objects with an arbitrary structure, it's easy to create a single JSON squished/merged/fused structure that will "accommodate" all JSON objects.
BigQuery is able to autodetect a schema, but only 500 lines are analyzed to come up with it. This presents problems if objects have different structures past that 500-line mark.
With this approach I can squish a JSON Lines file with millions of objects into one line that can then be imported into BigQuery with the autodetect schema flag, and it will work every time, since BigQuery only has one line to analyze and that line is the "super-schema" of all the objects. After extracting the autodetected schema I can manually fine-tune it to make sure the types are correct, and then recreate the table specifying my tuned schema:
$ ls -1 users*.json | wc --lines
3672
$ cat users*.json > users-all.json
$ cat users-all.json | wc --lines
146482633
$ jq 'squish' users-all.json > users-all-squished.json
$ cat users-all-squished.json | wc --lines
1
$ bq load --autodetect users users-all-squished.json
$ bq show schema --format=prettyjson users > users-schema.json
$ vi users-schema.json
$ bq rm --table users
$ bq mk --table users --schema=users-schema.json
$ bq load users users-all.json
[Some options are missing or changed for readability]
CodePudding user response:
Here is a solution that produces the expected result for the given example, and seems to meet all the stated requirements. It is similar to one proposed by @pmf on this page.
jq -n --stream '
def squish: map(if type == "number" then 0 else . end);
reduce (inputs | select(length==2)) as [$p, $v] ({}; setpath($p|squish; $v))
'
Output
For the example given in the Q, the output is:
{
"customerId": "xx",
"emails": [
{
"address": "[email protected]",
"customType": "",
"type": "custom",
"primary": true
}
],
"id": "654"
}
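For reference, this is what the raw --stream events look like for a small, made-up document: each [path, value] pair has length 2, the length-1 entries are the closing events that select(length==2) discards, and the numeric array indices in the paths are what squish collapses to 0:
$ echo '{"a": [{"b": 1}, {"c": 2}]}' | jq -c --stream '.'
[["a",0,"b"],1]
[["a",0,"b"]]
[["a",1,"c"],2]
[["a",1,"c"]]
[["a",1]]
[["a"]]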
CodePudding user response:
As @peak has pointed out, some aspects are underspecified. For instance, what should happen with .customerId
and .id
? Are they always the same across all files (as suggested by the sample files provided)? Do you want the items of the .emails
array just thrown into one large array, or do you want to have them "merged" by some criteria (e.g. by a common value in their .address
field)? Here are some stubs to start from:
- Simply concatenate the
.emails
arrays and take all other parts from the first file:
jq 'reduce inputs as $in (.; .emails += $in.emails)' file*.json
# or simpler
jq '.emails += [inputs.emails[]]' file*.json
{
"emails": [
{
"address": "[email protected]"
},
{
"address": "[email protected]",
"customType": "",
"type": "custom"
},
{
"address": "[email protected]"
},
{
"address": "[email protected]",
"primary": true
},
{
"address": "[email protected]"
},
{
"address": "[email protected]"
},
{
"address": "[email protected]",
"primary": true
},
{
"address": "[email protected]"
}
],
"customerId": "xx",
"id": "654"
}
- Merge the objects in the
.emails
array by a common value in their.address
field, with later values overwriting earlier ones for fields with colliding names, and discard all other parts of the files:
jq -n 'reduce inputs.emails[] as $e ({}; .[$e.address] += $e) | map(.)' file*.json
[
{
"address": "[email protected]"
},
{
"address": "[email protected]",
"customType": "",
"type": "custom"
},
{
"address": "[email protected]"
},
{
"address": "[email protected]",
"primary": true
},
{
"address": "[email protected]"
}
]
- If you are only interested in a list of unique field names for a given address, regardless of the counts and values used, you can also go with:
jq -n '
reduce inputs.emails[] as $e ({}; .[$e.address][$e | keys_unsorted[]] = 1)
| map_values(keys)
'
{
"[email protected]": [
"address"
],
"[email protected]": [
"address",
"customType",
"type"
],
"[email protected]": [
"address"
],
"[email protected]": [
"address",
"primary"
],
"[email protected]": [
"address"
]
}
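And if all you need is one flat list of the field names that occur in any of the email objects, regardless of address, a small variation of the same idea (shown here as a sketch) works too:
jq -n '[inputs.emails[] | keys_unsorted[]] | unique' file*.json
[
  "address",
  "customType",
  "primary",
  "type"
]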
CodePudding user response:
The structure of the objects contained in
file1.json
is unknown, so the solution must be agnostic of any keys/values and the solution must not assume any structure or depth.
You can use the --stream
flag to break down the structure into an array of paths and values, discard the values part and make the paths unique:
jq --stream -nc '[inputs[0]] | unique[]' file*.json
["customerId"]
["emails"]
["emails",0,"address"]
["emails",0,"customType"]
["emails",0,"primary"]
["emails",0,"type"]
["emails",1,"address"]
["emails",2]
["emails",2,"address"]
["emails",2,"primary"]
["emails",3]
["emails",3,"address"]
["id"]
Trying to build a representation of this, similar to any of the input files, comes with a lot of caveats. For instance, how would you represent, in a single structure, a case where one file had .emails
as an array of objects and another had .emails
as just an atomic value, say a string? You would not be able to represent this plurality without introducing new, possibly ambiguous structures (e.g. putting all possibilities into an array).
Therefore, having a list of paths could be a fair compromise. Judging by your desired output, you want to focus more on the object structure, so you could further reduce complexity by discarding the array indices. Depending on your use case, you could replace them with a single value to retain the information that an array is present, or discard them entirely:
jq --stream -nc '[inputs[0] | map(numbers = 0)] | unique[]' file*.json
["customerId"]
["emails"]
["emails",0]
["emails",0,"address"]
["emails",0,"customType"]
["emails",0,"primary"]
["emails",0,"type"]
["id"]
jq --stream -nc '[inputs[0] | map(strings)] | unique[]' file*.json
["customerId"]
["emails"]
["emails","address"]
["emails","customType"]
["emails","primary"]
["emails","type"]
["id"]
CodePudding user response:
The following program meets these two key requirements:
- "all keys from input objects must be present in output object";
- "the solution must be agnostic of any keys/values and the solution must not assume any structure or depth."
The approach is the same as one suggested by @pmf, and for the example given in the Q, it produces results that are very similar to the one shown (the main difference being that .emails comes out as an object rather than an array, because squish drops the numeric array indices from the paths):
jq -n --stream '
def squish: map(select(type == "string"));
reduce (inputs | select(length==2)) as [$p, $v] ({};
setpath($p|squish; $v))
'
With the given input, this produces:
{
"customerId": "xx",
"emails": {
"address": "[email protected]",
"customType": "",
"type": "custom",
"primary": true
},
"id": "654"
}
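To slot this into the pipeline from the question, one option is to save the filter to a file (the name squish.jq below is just an illustrative placeholder) and run it with jq's -f/--from-file flag; the -c flag keeps the output on a single line, which is what the BigQuery autodetect trick relies on. The rest of the pipeline stays the same:
# squish.jq is a hypothetical file holding the filter shown above
$ jq -nc --stream -f squish.jq users-all.json > users-all-squished.json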