OVERVIEW
I have an e-commerce app for buying goods, built with TypeScript and the Sequelize ORM. I have this example table, into which I insert a generated invoice number that is sequential within the current day.
CREATE TABLE `dm_generate_trx` (
  `id` int NOT NULL AUTO_INCREMENT,
  `date` date NOT NULL,
  `increment_value` int NOT NULL DEFAULT '0',
  `flag_hit` varchar(1) CHARACTER SET utf8 DEFAULT NULL,
  `created_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `updated_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `dm_generate_trx_UN` (`last_count`)
)
The table would look like this.
|id |date |increment_value|flag_hit|created_at |updated_at |
|---|----------|---------------|--------|-------------------|-------------------|
|118|2022-05-19|1 |N |2022-05-19 10:10:21|2022-05-19 10:10:21|
|126|2022-05-19|2 |N |2022-05-19 10:10:54|2022-05-19 10:10:54|
|134|2022-05-19|3 |N |2022-05-19 10:11:37|2022-05-19 10:11:37|
|142|2022-05-20|1 |N |2022-05-20 08:12:14|2022-05-20 08:12:14|
|150|2022-05-20|2 |N |2022-05-20 08:12:34|2022-05-20 08:12:34|
In my TypeScript code, I run this query before creating the invoice number. Each day, when the date changes, the increment starts from 1 again (just like in the table above).
const count = await dm_generate_trx.count({
  where: {
    createdAt: {
      [Op.gt]: todayStart, // the value will be "DATE NOW 00:00:00"
      [Op.lt]: now,
    },
  },
  transaction,
});
const currentCount = count + 1;
Or in SQL it would be like this
SELECT count(*) FROM dm_generate_trx
WHERE createdAt > 'DATE NOW 00:00:00' AND createdAt < 'DATE NOW HH:MM:SS'
Then I add 1 to the count, like this line:
const currentCount = count + 1;
And then I would insert it into the database.
PROBLEM
The problem appears when two or more transactions run and commit at almost exactly the same time, perhaps only milliseconds apart. The increment gets duplicated, because when the second transaction runs the count query, it doesn't see the value the first transaction has just inserted.
What I would expect the result to be:
id | date | increment_value | flag_hit | created_at | updated_at |
---|---|---|---|---|---|
118 | 2022-05-19 | 1 | N | 2022-05-19 10:10:21 | 2022-05-19 10:10:21 |
126 | 2022-05-19 | 2 | N | 2022-05-19 10:10:54 | 2022-05-19 10:10:54 |
134 | 2022-05-19 | 3 | N | 2022-05-19 10:11:37 | 2022-05-19 10:11:37 |
135 | 2022-05-19 | 4 | N | 2022-05-19 10:12:37 | 2022-05-19 10:12:37 |
142 | 2022-05-20 | 1 | N | 2022-05-20 08:12:14 | 2022-05-20 08:12:14 |
150 | 2022-05-20 | 2 | N | 2022-05-20 08:12:34 | 2022-05-20 08:12:34 |
But instead, I get duplicate data like this:
id | date | increment_value | flag_hit | created_at | updated_at |
---|---|---|---|---|---|
118 | 2022-05-19 | 1 | N | 2022-05-19 10:10:21 | 2022-05-19 10:10:21 |
126 | 2022-05-19 | 1 | N | 2022-05-19 10:10:54 | 2022-05-19 10:10:54 |
134 | 2022-05-19 | 1 | N | 2022-05-19 10:11:37 | 2022-05-19 10:11:37 |
135 | 2022-05-19 | 2 | N | 2022-05-19 10:12:37 | 2022-05-19 10:12:37 |
142 | 2022-05-20 | 1 | N | 2022-05-20 08:12:14 | 2022-05-20 08:12:14 |
150 | 2022-05-20 | 1 | N | 2022-05-20 08:12:34 | 2022-05-20 08:12:34 |
How do I solve this issue? I have tried ISOLATION_LEVELS.READ_UNCOMMITTED
and it still didn't work. Should I solve this in application code or in the database?
CodePudding user response:
So, I think you have two options:
- Compute the new increment_value then do the insert within a single client transaction.
- Create a stored procedure, and do this within a single transaction in the database.
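A minimal sketch of the stored-procedure option, assuming MySQL/InnoDB. The procedure name `next_invoice_increment` is made up, and the SELECT ... FOR UPDATE row lock is my addition on top of the plain transaction the answer describes; it relies on an index on `date`, and under heavy concurrency two callers can still deadlock, in which case one should retry.

```sql
-- Hypothetical procedure: computes the next per-day value and inserts it
-- inside a single database-side transaction.
DELIMITER //
CREATE PROCEDURE next_invoice_increment(OUT p_next INT)
BEGIN
  START TRANSACTION;
  -- Lock today's rows so a concurrent caller blocks until we commit.
  SELECT COALESCE(MAX(increment_value), 0) + 1 INTO p_next
    FROM dm_generate_trx
   WHERE `date` = CURDATE()
     FOR UPDATE;
  INSERT INTO dm_generate_trx (`date`, increment_value, flag_hit)
  VALUES (CURDATE(), p_next, 'N');
  COMMIT;
END //
DELIMITER ;
```

The app would then run `CALL next_invoice_increment(@n); SELECT @n;` instead of counting rows client-side.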
The READ_UNCOMMITTED isolation level feels wrong. You wouldn't want dirty reads when figuring out count data. You want the isolation level to tell you what has already been committed.
CodePudding user response:
What you have is a race condition.
Using READ_UNCOMMITTED doesn't solve this*, because you still have a brief moment between reading the current value, incrementing it in TypeScript, and then inserting it into the table. That moment might seem so brief that it's impossible for a problem to occur, but trust me: it will.
There's an old saying regarding unlikely events: "One in a million is next Tuesday." Even if an event is extremely rare, if you repeat the conditions that could lead to the event many, many times, it'll happen sooner than you think. You should also read about the Birthday problem.
You can't solve this without some kind of locking to ensure the concurrent transactions are not in a race condition. One has to wait while the other commits its value.
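To make the race concrete, here is a small self-contained TypeScript sketch. It is an in-memory stand-in, not the actual Sequelize code: the racy version reads the count, yields to the event loop (a stand-in for network latency), then inserts, while the locked version serializes the whole read-increment-insert section behind a mutex, which is the same "one waits while the other commits" that a real database lock provides.

```typescript
// A tiny promise-chain mutex: each call to run() waits for the previous one.
class Mutex {
  private queue: Promise<void> = Promise.resolve();
  run<T>(fn: () => Promise<T>): Promise<T> {
    const result = this.queue.then(fn);
    // Keep the chain alive even if fn rejects.
    this.queue = result.then(() => undefined, () => undefined);
    return result;
  }
}

// In-memory stand-in for the dm_generate_trx table.
const rows: { date: string; increment_value: number }[] = [];
const mutex = new Mutex();

// Racy version: read the count, yield to the event loop (simulating query
// latency), then insert. Concurrent callers all read the same count.
async function insertRacy(date: string): Promise<number> {
  const count = rows.filter(r => r.date === date).length;
  await new Promise(res => setTimeout(res, 1)); // the race window
  const next = count + 1;
  rows.push({ date, increment_value: next });
  return next;
}

// Locked version: the entire read-increment-insert section runs under the
// mutex, so concurrent callers are serialized.
async function insertLocked(date: string): Promise<number> {
  return mutex.run(async () => {
    const count = rows.filter(r => r.date === date).length;
    await new Promise(res => setTimeout(res, 1)); // same latency, no race
    const next = count + 1;
    rows.push({ date, increment_value: next });
    return next;
  });
}

// Five concurrent racy inserts all read count = 0 and produce 1, 1, 1, 1, 1;
// five locked inserts produce 1, 2, 3, 4, 5.
```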
Typically in MySQL this is solved with the AUTO_INCREMENT feature. Each session can insert to the same table in concurrent transactions. The table keeps track of the most recently allocated auto-increment id. The insert acquires a table lock only long enough to generate the next id, then it releases that table lock.
But the auto-increment feature doesn't support what you want, an incrementing number that restarts from 1 each day. To do that, you have to make each session lock the table, read the current max value, increment it, and finally insert the incremented value. You can do this with the LOCK TABLES statement.
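A sketch of that lock-read-increment-insert sequence in plain MySQL (note that LOCK TABLES implicitly commits any open transaction, so it has to run as its own unit of work):

```sql
-- Serialize all writers: only one session may hold the WRITE lock at a time.
LOCK TABLES dm_generate_trx WRITE;

SELECT COALESCE(MAX(increment_value), 0) + 1 INTO @next
  FROM dm_generate_trx
 WHERE `date` = CURDATE();

INSERT INTO dm_generate_trx (`date`, increment_value, flag_hit)
VALUES (CURDATE(), @next, 'N');

UNLOCK TABLES;
```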
There is no conventional transactional way of solving this, because you would still have a race condition.
* In fact, I have never seen a legitimate use for READ_UNCOMMITTED.