I'm currently evaluating zod for my application and have run into a small problem when parsing an object that can contain optional keys. I'm using .passthrough() so the extra keys stay in the object, but I would like to custom-validate those keys, or at least make sure that their names and types are valid. .catchall() only lets me specify a single type for all optional keys, whereas I need to validate each optional key individually.
import {z} from 'zod';
// mandatory user information
const user = z.object({
id: z.number(),
name: z.string(),
});
// additional, optional keys like:
// string value: key in the format /^add_\d{3}_s$/
// number value: key in the format /^add_\d{3}_n$/
//
// add_001_s: z.string()
// add_002_s: z.string()
// add_003_n: z.number()
// add_004_n: z.number()
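For example, a payload to validate might look like this (hypothetical values):

```typescript
// Hypothetical example payload: the mandatory fields plus optional
// keys whose names encode the expected value type.
const payload = {
  id: 1,
  name: "alice",
  add_001_s: "hello", // key matches /^add_\d{3}_s$/, so value must be a string
  add_003_n: 42,      // key matches /^add_\d{3}_n$/, so value must be a number
};

console.log(payload);
```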
Answer:
The way I would go about this is with a combination of three schemas as follows:
import { z } from "zod";
const mandatoryFields = z.object({
id: z.number(),
name: z.string()
});
const stringRegex = /^add_\d{3}_s$/;
const optionalStringFields = z.record(
z.string().regex(stringRegex),
z.string()
);
const numberRegex = /^add_\d{3}_n$/;
const optionalNumberFields = z.record(
z.string().regex(numberRegex),
z.number()
);
These three schemas make up the core of the type you want to parse, but there isn't a great way to combine them using .and(), because the record types would conflict and the mandatory fields also aren't parsable as part of either record. I also think it would be difficult to define a vanilla TypeScript type for this data without a massive enumerated type.
My solution keeps these three base schemas and uses z.preprocess
to reshape the input into a new object that breaks out each of the three pieces of the schema. The preprocessing step doesn't do any validation; it just groups the fields so they can be passed into the final schema, which does the more specific checks:
const schema = z.preprocess(
(args) => {
const unknownRecord = z.record(z.string(), z.unknown()).safeParse(args);
if (!unknownRecord.success) {
// In the event that what was passed in wasn't an unknown record
// this skips the rest of the preprocessing and lets the schema
// fail with a better error message.
return args;
}
const entries = Object.entries(unknownRecord.data);
// Pulls out just stuff that looks like optional number fields
const numbers = Object.fromEntries(
entries.filter(([k]) => numberRegex.test(k))
);
// Pulls out just the keys that look like optional string fields
const strings = Object.fromEntries(
entries.filter(([k]) => stringRegex.test(k))
);
// The types here are all unknowns but now the pieces of the data
// have been grouped in a way that those three core schemas can parse them
return {
mandatory: args,
numbers,
strings
};
},
z.object({
mandatory: mandatoryFields,
numbers: optionalNumberFields,
strings: optionalStringFields
})
);
So now, if you pass in something like:
const test = schema.parse({
id: 11,
name: "steve",
add_101_s: "cat",
add_123_n: 43,
dont_care: "something"
});
console.log(test);
/* Logs:
{
  mandatory: { id: 11, name: "steve" },
  numbers: { add_123_n: 43 },
  strings: { add_101_s: "cat" }
}
*/
You get individual sections back for each of the pieces. This also doesn't pass through unnecessary fields like dont_care, which is a bit of a benefit over using .passthrough() to attempt to accomplish this.
I think this is probably the best option, unless you want to come up with a massive optional mapped type for both of the things I'm currently calling records. That could yield more precise types, but you'd end up with a massive file to fully enumerate the fields.