Hibernate reset sequence cache


Example:

  • suppose that entity E has its id generated by the sequence e_seq
  • suppose that the sequence value is initially 0 in the database, and the increment is configured to be 50 (see the mapping sketch after this list)
  • when Hibernate starts, it gets the next value of the sequence (i.e. 0 + 50 = 50) and keeps an internal cache of the available values (i.e. those in the interval 0-50)
  • as long as the cache has available values, no further requests are made to the DBMS to get the next value of the sequence
  • only after you create 50 instances of entity E are the 50 ids consumed, and Hibernate asks the DBMS for the next value
  • suppose that the Hibernate cache still has 50 ids available
  • suppose that a low-level procedure (like a data migration) inserts, let's say, 100 entities of type E into the database using SQL statements (not using Hibernate APIs), with ids from 1 to 100, and then resets the sequence value to 100
  • if the application tries to create a new entity through its APIs, it will use an id taken from the Hibernate cache that has already been used by the low-level procedure, hence causing a duplicate id exception
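
For reference, this is the kind of mapping that produces the behaviour described above; a minimal sketch assuming a plain JPA entity named E and the sequence e_seq from the scenario (class and field names are illustrative, and javax.persistence is assumed; newer stacks use jakarta.persistence instead):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

@Entity
public class E {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "e_seq_gen")
    // allocationSize = 50 is what makes Hibernate reserve a block of 50 ids
    // per database round trip (the "internal cache" described above)
    @SequenceGenerator(name = "e_seq_gen", sequenceName = "e_seq", allocationSize = 50)
    private Long id;

    // other fields omitted
}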

I therefore need to find a way to tell Hibernate to "reset its id cache", or in other words to force Hibernate to contact the DBMS again to get the current sequence value.

CodePudding user response:

a low-level procedure [...] inserts let's say 100 entities [...] with ids from 1 to 100

Why is that low-level procedure generating the IDs on its own? Why is it NOT using the sequence?

The whole point of Hibernate's pooled and pooled-lo ID generating mechanisms, which you appear to be using (and definitely should be, if you're not), is to be able to safely cache IDs even in the face of external processes making use of the sequence outside of Hibernate's control.
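
If you want to be explicit about which optimizer is used rather than relying on the defaults, a sketch along these lines should work, assuming Hibernate 5.x annotations (the generator name is illustrative; the parameter keys are the ones understood by SequenceStyleGenerator):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.hibernate.annotations.GenericGenerator;
import org.hibernate.annotations.Parameter;

@Entity
public class E {

    @Id
    @GeneratedValue(generator = "e_seq_gen")
    @GenericGenerator(
        name = "e_seq_gen",
        strategy = "org.hibernate.id.enhanced.SequenceStyleGenerator",
        parameters = {
            @Parameter(name = "sequence_name", value = "e_seq"),
            @Parameter(name = "increment_size", value = "50"),
            // "pooled" or "pooled-lo": cached blocks stay in step with the sequence,
            // so anything else calling NEXTVAL cannot hand out the same ids
            @Parameter(name = "optimizer", value = "pooled-lo")
        }
    )
    private Long id;
}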

If that external process used the sequence too, your problem would disappear, since the external process would never touch any of Hibernate's cached values; and the next batch of cached values would start from whatever sequence value the external process last generated, avoiding conflicts:

  1. Hibernate caches values 0-49. sequence.NEXTVAL would be 50.
  2. External process inserts 100 rows. sequence.NEXTVAL would be 5050.
  3. Hibernate ends up using all cached values, and asks for the next sequence value.
  4. Hibernate caches values 5050-5099. sequence.NEXTVAL would be 5100.
  5. Etc.

The solution to your issue, assuming you're using Hibernate's pooled(-lo) ID strategy, is not to disable or reset Hibernate's cache and hinder your application performance; the solution is to make any external processes use NEXTVAL() too to generate the appropriate IDs for the entities when inserting data into that table, instead of providing their own values.
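
For instance, a migration step could let the database assign the IDs from the same sequence. A rough JDBC sketch, assuming PostgreSQL's nextval() syntax and an illustrative table/column layout for E:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class MigrationInsert {

    public static void main(String[] args) throws SQLException {
        // connection details are placeholders
        try (Connection con = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/app", "user", "secret");
             // let the database pull each id from e_seq, exactly like Hibernate does
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO e (id, name) VALUES (nextval('e_seq'), ?)")) {
            for (int i = 0; i < 100; i++) {
                ps.setString(1, "migrated-" + i);
                ps.addBatch();
            }
            ps.executeBatch();
        }
        // no manual reset of the sequence is needed afterwards:
        // its current value already reflects the rows inserted here
    }
}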


Concerns:

"But then I would end up with gaps in my IDs!"
So what?
There's no problem whatsoever with your ID column having gaps. Your goal here is avoiding ID conflicts and ensuring that your application does not make two trips to the DB (one for the sequence, one for the actual insert) every time you create an entity. If not having a neat, perfectly sequential set of IDs is the price to pay for that, so be it! Quite a deal, if you ask me ;)

"But then entities that were created later using Hibernate's cached values would have a lower ID than those created by the external process before!"
So what?
The primary goal of having an ID column is to be able to uniquely identify a row via a single value. Discerning order of creation should not be a factor in how you manage your ID values; a timestamp column is better suited for that.

"But the ID value would grow up too fast! I just inserted 50 rows and it's already by the thousands! I'll run out of numbers!"
Ok, legitimate concern here. But if you're using sequences, chances are you're using either Oracle or PostgreSQL, maybe SQL Server. Am I right?
Well, PostgreSQL's MAXVALUE for a bigint sequence is 9223372036854775807. Same goes for SQL Server. If your process inserted a new row each millisecond non-stop, it would still take it 5 million years to reach the end of the sequence. Oracle's MAXVALUE for a sequence is 999999999999999999999999999, several orders of magnitude greater than that.
So... As long as the datatype of your ID column and sequence is aptly chosen, you're safe on that regard.
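
A quick back-of-the-envelope check of that figure, as a sketch (assumptions: one insert per millisecond, and the worst case of one NEXTVAL with increment 50 per inserted row):

public class SequenceExhaustionEstimate {

    public static void main(String[] args) {
        double maxValue = 9_223_372_036_854_775_807d;         // bigint sequence MAXVALUE
        double valuesBurnedPerInsert = 50;                    // worst case: one NEXTVAL (increment 50) per row
        double insertsPerYear = 1000.0 * 60 * 60 * 24 * 365;  // one insert per millisecond
        double years = maxValue / valuesBurnedPerInsert / insertsPerYear;
        System.out.printf("~%.1f million years to exhaust the sequence%n", years / 1_000_000);
        // prints roughly 5.8 million years
    }
}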

CodePudding user response:

Have you tried to clear the current session and create a new one? This forces Hibernate to re-query the database for the current sequence value.

In other words, you can use the Session.flush() and Session.clear() methods:

// sessionFactory is an already-built org.hibernate.SessionFactory
Session session = sessionFactory.openSession();
Transaction transaction = session.beginTransaction();

// Perform some operations that use the id cache

session.flush();
session.clear();

// Perform some more operations that use the id cache

transaction.commit();
session.close();

Or you could use EntityManager.refresh(), which refreshes the state of the instance from the database and, in the process, updates the internal cache with the current sequence value:

// entityManagerFactory is an already-built javax.persistence.EntityManagerFactory
EntityManager em = entityManagerFactory.createEntityManager();
em.getTransaction().begin();

// Perform some operations that use the id cache

em.refresh(entity);

// Perform some more operations that use the id cache

em.getTransaction().commit();
em.close();