I have two transactions.

In the first, I select an entity, run validations, upload a file provided by the client to S3, and then update the entity with information about the S3 file. The second transaction simply deletes the entity.

Now assume someone calls the first transaction and immediately afterwards the second. The second one finishes faster, and the first one throws DbUpdateConcurrencyException, because the selected entity no longer exists when the update query runs.
I get DbUpdateConcurrencyException when my transaction uses IsolationLevel.ReadCommitted. But if I set IsolationLevel.Serializable, it throws InvalidOperationException with Postgres error code 40001. Could someone explain why I get different errors? It seems to me that the outcome should be the same, since both errors are triggered by updating a non-existing entity.
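For reference, the flow looks roughly like this (a simplified sketch, not my exact code; Document, context, Validate, and UploadToS3Async are placeholder names):

```csharp
using System.Data;
using Microsoft.EntityFrameworkCore;

// Transaction 1: select, validate, upload to S3, update the entity
await using (var tx = await context.Database
    .BeginTransactionAsync(IsolationLevel.ReadCommitted))
{
    var entity = await context.Documents.SingleAsync(d => d.Id == id);
    Validate(entity);                         // business validations
    var s3Key = await UploadToS3Async(file);  // external call to S3
    entity.S3Key = s3Key;
    await context.SaveChangesAsync();         // UPDATE ... WHERE "Id" = @id
    await tx.CommitAsync();
}

// Transaction 2: delete the same entity
await using (var tx = await context.Database.BeginTransactionAsync())
{
    var entity = await context.Documents.SingleAsync(d => d.Id == id);
    context.Documents.Remove(entity);
    await context.SaveChangesAsync();         // DELETE ... WHERE "Id" = @id
    await tx.CommitAsync();
}
```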
CodePudding user response:
The 40001 error corresponds to the SQLSTATE serialization_failure (see PostgreSQL's table of error codes). It's generated by the database engine at the serializable isolation level when it detects that concurrent transactions exist and that this transaction may have produced a result that could not have been obtained if those transactions had run serially.
When using IsolationLevel.ReadCommitted, it's impossible to get this error, because choosing that isolation level precisely means the client side does not want the database to perform these serializability checks.
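In EF Core terms, that choice is made when the transaction is opened (a minimal sketch; context stands for your DbContext):

```csharp
using System.Data;
using Microsoft.EntityFrameworkCore;

// Opening the transaction decides which checks Postgres will perform:
using var tx = context.Database.BeginTransaction(IsolationLevel.Serializable);
// ... queries here are subject to full serializability checking; with
// IsolationLevel.ReadCommitted instead, Postgres skips those checks entirely.
tx.Commit();
```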
On the other hand, DbUpdateConcurrencyException is probably not generated by the database engine; it's generated by Entity Framework. The database itself is fine with an UPDATE affecting zero rows; that is not an error at the SQL level.
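A minimal sketch of where that surfaces client-side (entity and property names are placeholders): EF Core compares the number of rows each statement actually affected with the number it expected, and throws when they differ.

```csharp
using Microsoft.EntityFrameworkCore;

try
{
    entity.S3Key = s3Key;
    await context.SaveChangesAsync();   // expects the UPDATE to affect 1 row
}
catch (DbUpdateConcurrencyException)
{
    // The UPDATE matched 0 rows: the entity was deleted (or its concurrency
    // token changed) between the SELECT and the UPDATE. Postgres itself
    // reported no error; EF raised this because 0 != the expected 1.
}
```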
I think you get the serialization failure when the database errors out first, and the DbUpdateConcurrencyException when the database doesn't error out but the next layer up (EF) does.
The typical way to deal with serialization failures at the serializable isolation level is for the client side to retry the transaction when it gets a 40001 error. The retried transaction has a fresh view of the data and will hopefully succeed (otherwise, keep retrying in a loop).
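A sketch of such a retry loop, assuming EF Core with the Npgsql provider (PostgresException and its SqlState property come from Npgsql; since the 40001 may arrive wrapped in an EF exception, the filter checks the inner exception too; context, Documents, id, and s3Key are placeholders):

```csharp
using System.Data;
using Microsoft.EntityFrameworkCore;
using Npgsql;

const int maxRetries = 3;
for (var attempt = 1; ; attempt++)
{
    try
    {
        await using var tx = await context.Database
            .BeginTransactionAsync(IsolationLevel.Serializable);

        // Re-read inside the transaction so each attempt sees fresh data.
        var entity = await context.Documents.SingleAsync(d => d.Id == id);
        entity.S3Key = s3Key;
        await context.SaveChangesAsync();
        await tx.CommitAsync();
        break;                              // committed successfully
    }
    catch (Exception ex) when (attempt < maxRetries
        && (ex is PostgresException { SqlState: "40001" }
            || ex.InnerException is PostgresException { SqlState: "40001" }))
    {
        context.ChangeTracker.Clear();      // drop stale tracked entities (EF Core 5+)
    }
}
```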
The typical way to deal with concurrency at weaker isolation levels like Read Committed is to explicitly lock objects before accessing them, forcing the serialization of concurrent transactions.
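From EF Core against Postgres, one way to take that explicit lock is a raw SELECT ... FOR UPDATE at the start of the transaction (a sketch with placeholder table and entity names; FromSqlInterpolated parameterizes id):

```csharp
using System.Data;
using Microsoft.EntityFrameworkCore;

await using var tx = await context.Database
    .BeginTransactionAsync(IsolationLevel.ReadCommitted);

// Row-level lock: a concurrent DELETE of the same row now blocks until
// this transaction commits or rolls back, serializing the two transactions.
var entity = (await context.Documents
        .FromSqlInterpolated($@"SELECT * FROM ""Documents"" WHERE ""Id"" = {id} FOR UPDATE")
        .ToListAsync())
    .Single();

// ... validate, upload to S3, set entity.S3Key ...
await context.SaveChangesAsync();
await tx.CommitAsync();
```

With the lock in place, the delete transaction waits for the upload transaction to finish, so the update never targets a row that has already vanished.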