Is there a limit to the number of transactions that can be queued up for a row lock in MySQL/InnoDB?


I'm working on an application that sees thousands of essentially simultaneous login attempts. These login attempts depend on ADFS metadata that has to be refreshed from time to time for different groups of users. While I am building an automatic refresher that runs when it gets to T-12 hours before the refresh is required, I also want to handle the case where the automatic refresh fails to occur. What I want is for the first login attempt that fails due to out-of-date metadata to trigger a refresh, but only the first; otherwise we'll send thousands of unnecessary requests to the ADFS server.

Since the ADFS metadata is stored in a MySQL table anyway, I thought of using the InnoDB locking mechanism to handle this. If a login request fails due to out-of-date metadata, it will request a lock on the row holding the relevant metadata. If the lock is granted, it will check the refresh date on the metadata, and if that is out of date, it will trigger a refresh of the metadata and then write the new metadata to that row.

All subsequent logins that fail due to old metadata will also request their own locks, which will not be granted because the first request was granted a lock. As soon as the first request finishes updating the metadata, it will release the lock, and the next lock will be granted. That request will check the refresh date, see that it does not need to be refreshed, and continue as normal with the new metadata.
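The scheme described above maps onto a `SELECT ... FOR UPDATE` locking read. Here is a sketch; the table and column names (`adfs_metadata`, `group_id`, `refreshed_at`, `metadata`) are assumptions, not from the original post:

```sql
START TRANSACTION;

-- Blocks here until any earlier transaction holding the row lock commits.
SELECT refreshed_at, metadata
FROM adfs_metadata
WHERE group_id = 42
FOR UPDATE;

-- In application code: if refreshed_at is still stale, fetch fresh metadata
-- from the ADFS server, then persist it while still holding the lock.
UPDATE adfs_metadata
SET metadata = '...new metadata...', refreshed_at = NOW()
WHERE group_id = 42;

COMMIT;  -- releases the row lock; the next waiter re-checks refreshed_at
```

Each waiter that finally acquires the lock sees the fresh `refreshed_at`, skips the `UPDATE`, and proceeds with a normal login.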

My question is, can MySQL/InnoDB handle, say, 10,000 transactions waiting for a lock on a single row? If there is a limit, can the limit be changed?

CodePudding user response:

You can have at most one transaction active per user thread, therefore only max_connections transactions can exist at a given moment. Each transaction is single-threaded, so it can run only one SQL query at a time. This places an upper limit on the number of lock-waits.
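Since the ceiling on concurrent lock-waiters is tied to connections, 10,000 waiting transactions would require `max_connections` of at least 10,000 (the default is far lower). You can inspect and raise it:

```sql
SHOW VARIABLES LIKE 'max_connections';

-- Raise it at runtime; also set it in my.cnf so it survives a restart.
SET GLOBAL max_connections = 10000;
```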

Granted, each SQL query might be waiting for many locks, so some memory is needed to keep track of each lock, and memory is a finite resource. InnoDB uses a portion of the buffer pool for a data structure it calls the "lock table." This is not a table you can access using SQL, but it uses the same internal API used for storing tables.
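You can observe per-transaction lock memory in the engine status output; the TRANSACTIONS section reports lock structs and heap size for each active transaction:

```sql
-- \G formats the one-row result vertically in the mysql client.
SHOW ENGINE INNODB STATUS\G
-- Look for lines like "N lock struct(s), heap size ..." under TRANSACTIONS.
```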

https://www.percona.com/blog/2006/07/13/how-much-memory-innodb-locks-really-take/ says:

So we locked over 100K rows using about 44KB. This is still quite efficient using less than 4 bits per locked row.

https://blog.pythian.com/innodb-error-total-number-of-locks-exceeds-the-lock-table-size/ says:

The locks table size for the 677 million rows was 327M according to the InnoDB status.

Speaking from my own experience, I have never encountered this error, but it is possible if your buffer pool is small.

I would say that if you have that much locking queueing up, you will probably notice other problems before InnoDB runs out of memory for the lock table. For example, all your clients will appear to hang, waiting on queries that cannot proceed.
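Note also that waiters do not queue forever: each blocked statement gives up after `innodb_lock_wait_timeout` seconds (50 by default), so with thousands of queued logins, many would time out before the refresh completes unless you tune this:

```sql
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';

-- A statement that waits longer than this fails with:
-- ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
```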

CodePudding user response:

Consider using GET_LOCK() instead.
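`GET_LOCK()` takes a named, advisory lock at the server level without tying up a row or an InnoDB transaction: only the session that obtains it performs the refresh, and everyone else can bail out immediately instead of piling up in a lock queue. A sketch, where the lock name is an arbitrary choice:

```sql
-- Try to become the single refresher; a timeout of 0 means concurrent
-- logins fail fast instead of queueing behind the lock holder.
SELECT GET_LOCK('adfs_metadata_refresh', 0);
-- Returns 1 if this session got the lock (do the refresh, then release),
-- 0 if another session already holds it, NULL on error.

SELECT RELEASE_LOCK('adfs_metadata_refresh');
```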
