I am developing inventory management software. The database uses RLS so that multiple clients can use the same database; as such, every table has a "Client ID" column. When inventory is scanned, the quantities for the "active", "reserved" or "incoming" states are updated.
Active = ready to use, Reserved = temporarily held because an order may be on its way (but could still be cancelled), and Incoming = stock on order.
These states are adjusted at various points - sales, goods received, stock count updates, etc. I want to know how best to avoid data inconsistencies. What I want to do is basically tell Postgres "lock this table for client X, but only the rows matching SKU XXX, while I run this entire transaction", but I can't figure out how to do it.
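For context, a simplified sketch of the kind of table this is about (the table and column names here are illustrative, not my real schema):
create table inventory (
    client_id    int  not null,            -- RLS policies filter on this
    sku          text not null,
    qty_active   int  not null default 0,  -- ready to use
    qty_reserved int  not null default 0,  -- held for a possibly-cancelled order
    qty_incoming int  not null default 0,  -- ordered stock
    primary key (client_id, sku)
);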
CodePudding user response:
It sounds like you need a table of customer/SKU combinations, and anyone doing work that needs to be serialized must first lock the relevant row in that table. Or, if you don't want to permanently enumerate all possible client/SKU combinations, you can do it on the fly:
create table customer_sku_lock (customer_id int, sku text, primary key (customer_id, sku));
Then
BEGIN;
insert into customer_sku_lock values (4, 'DEADBEEF'); -- get lock
delete from customer_sku_lock where customer_id=4 and sku='DEADBEEF'; -- set up to release the lock upon commit
-- do stuff here
COMMIT;
It might seem counterintuitive to do the DELETE right after the INSERT, but it does serialize things: a concurrent transaction inserting the same (customer_id, sku) blocks on the primary-key uniqueness check until the first transaction commits or rolls back, and because the first transaction also deleted its row, the waiting INSERT then succeeds instead of failing with a duplicate-key error. Doing the DELETE immediately also means it is less likely to be forgotten. For extra robustness, you could even package the INSERT and DELETE up into one function, make it SECURITY DEFINER, and make the lock table accessible only to the function's owner.
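A minimal sketch of that function-based approach, assuming the lock table lives in the public schema (the function name acquire_sku_lock and the parameter names are made up here):
create or replace function acquire_sku_lock(p_customer_id int, p_sku text)
returns void
language sql
security definer            -- runs with the privileges of the function's owner
set search_path = public    -- pin search_path, as usual for SECURITY DEFINER functions
as $$
    -- blocks any concurrent caller for the same (customer_id, sku)
    -- until the calling transaction commits or rolls back
    insert into customer_sku_lock values (p_customer_id, p_sku);
    -- the row is gone again by commit time, so the lock table stays empty
    delete from customer_sku_lock
    where customer_id = p_customer_id and sku = p_sku;
$$;
-- Give application roles no direct privileges on customer_sku_lock; they only
-- call the function (EXECUTE is granted to PUBLIC by default).
A caller would then run select acquire_sku_lock(4, 'DEADBEEF'); at the start of the transaction and do the inventory updates after that.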