I'm currently designing a database for a uni project that's supposed to support a coffee shop. The Person and Reservation entities have a many-to-many relationship: a person can make multiple reservations (for different days, for example), and a reservation can cover multiple people occupying a table.
But this same model also allows a person to make multiple reservations for the same period, so that a bigger group could occupy several tables. I want to avoid that, because a person will only sit at the one table they reserved for themselves and a few other people.
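Here's a rough sketch of the model I have in mind (using SQLite from Python just to illustrate; the table and column names are placeholders, not my actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE reservation (
        id          INTEGER PRIMARY KEY,
        reserved_by INTEGER NOT NULL REFERENCES person(id),
        day         TEXT NOT NULL,
        table_no    INTEGER NOT NULL
    );
    -- junction table resolving the many-to-many relationship
    CREATE TABLE person_reservation (
        person_id      INTEGER NOT NULL REFERENCES person(id),
        reservation_id INTEGER NOT NULL REFERENCES reservation(id),
        PRIMARY KEY (person_id, reservation_id)
    );
""")

# Nothing in the model stops the same person from reserving
# two tables for the same day:
conn.execute("INSERT INTO person VALUES (1, 'Alice')")
conn.execute("INSERT INTO reservation VALUES (1, 1, '2024-05-01', 4)")
conn.execute("INSERT INTO reservation VALUES (2, 1, '2024-05-01', 5)")  # also succeeds
```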
As long as the model fits my initial intent, I don't have to worry about potentially "hazardous" data, right? Because at the end of the day, the fault would lie with whoever inserts data that doesn't match the model's intent?
Sorry for any confusion caused by the wording of the question
CodePudding user response:
Good question. I'd say the answer is the same one every contractor would give you: "it depends".
Some teams consider the database a "repository" of data. The repository only stores data and is not responsible for any level of data validation or integrity. All validations, checks and balances, and data integrity rules are performed and enforced by the applications -- emphasis on the plural here. That means every app that accesses the database needs to abide by the same rules, now and in the future (will that really happen?).
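For example, under the "repository" approach each application would have to implement the "one reservation per person per period" rule itself. A minimal sketch, reusing the hypothetical reserved_by and day columns from your question:

```python
import sqlite3

def make_reservation(conn: sqlite3.Connection, person_id: int,
                     day: str, table_no: int) -> None:
    # Application-level rule: reject a second reservation by the
    # same person for the same day. The database itself doesn't care.
    exists = conn.execute(
        "SELECT 1 FROM reservation WHERE reserved_by = ? AND day = ?",
        (person_id, day),
    ).fetchone()
    if exists:
        raise ValueError("person already has a reservation for that day")
    conn.execute(
        "INSERT INTO reservation (reserved_by, day, table_no) VALUES (?, ?, ?)",
        (person_id, day, table_no),
    )
```

Every app (and every ad-hoc script) that writes to the table must repeat this check, and two concurrent writers can still race past it between the SELECT and the INSERT.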
On the other hand, you have the fully-fledged "relational database" point of view that enforces rules on the data by implementing "constraints". These rules ensure some level of data quality and integrity for each and every app that tries to save data. However, this is not foolproof, particularly for non-trivial rules.
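Your rule happens to be simple enough to express declaratively. A sketch, again with assumed column names: a UNIQUE constraint over (reserved_by, day) makes the database itself reject the second reservation, no matter which app tries to insert it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE reservation (
        id          INTEGER PRIMARY KEY,
        reserved_by INTEGER NOT NULL,
        day         TEXT NOT NULL,
        table_no    INTEGER NOT NULL,
        UNIQUE (reserved_by, day)  -- one reservation per person per day
    )
""")
conn.execute("INSERT INTO reservation (reserved_by, day, table_no) VALUES (1, '2024-05-01', 4)")
try:
    conn.execute("INSERT INTO reservation (reserved_by, day, table_no) VALUES (1, '2024-05-01', 5)")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)  # UNIQUE constraint failed: reservation.reserved_by, reservation.day
```

If a "period" is finer-grained than a whole day, the same idea applies with an extra time-slot column in the unique key.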
Now, in the real world there are always budget limits, urgent deadlines, and a myriad of other restrictions that limit the quality of the apps teams deploy to production. This entails a limited level of QA and a fast time to market. In these cases, not having integrity rules can be a blessing or a curse. A blessing, because the new version of the application deployed overnight won't fail immediately when it hits bugs in production; it will store [bad] data anyway. A curse, because someone will have to deal with that bad data at some point.
Personally, I'm a fan of enforcing integrity rules at the database level as much as possible, so that bad code fails fast during internal tests or official QA. This way the database keeps a decent-to-great level of quality, and I don't get a phone call at 2 am on a Sunday because all hell broke loose.