Postgres JDBC driver adaptive fetching running out of memory

Time: 01-22

I'm trying to use the Postgres JDBC driver to query data from a table where each row can be up to about 50MB. Unfortunately, without any memory restrictions, the Postgres driver can use too much memory and cause OOMs (even with a very healthy Xmx) because it buffers so much data locally.

I've tried to restrict the driver to using less memory, for example 1GB, and to tell it to buffer less too. Since no single row is bigger than 50MB this should work fine, but unfortunately I'm now getting exceptions thrown from the Postgres driver itself, because it is trying to allocate more memory than I have configured it with.

If I use this configuration:

"jdbc:postgresql://localhost/dbname?maxResultBuffer=1G&adaptiveFetch=true&adaptiveFetchMaximum=2&defaultRowFetchSize=1"

I'll get an Exception thrown here, in PGStream:

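      // resultBufferByteCount is the running total of bytes buffered so far for this result set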
      if (resultBufferByteCount > maxResultBuffer) {
        throw new PSQLException(GT.tr(
          "Result set exceeded maxResultBuffer limit. Received:  {0}; Current limit: {1}",
          String.valueOf(resultBufferByteCount), String.valueOf(maxResultBuffer)),PSQLState.COMMUNICATION_ERROR);
      }

If I set a breakpoint there I can see:

value = 41155480
resultBufferByteCount = 1021091718
maxResultBuffer = 1000000000

Which shows it's picking up the maxResultBuffer config fine. I've also inspected it to make sure it's picking up the fetch size config, and it is.

Is there some other config I'm missing? Clearly the Postgres driver is reading more rows than I've allowed it to.

thanks

(postgresql 42.5.1, java 17.0.5, hikaricp 5.0.1 with max connections of 1)

CodePudding user response:

The adaptive buffer, like setFetchSize, only works if autocommit is off. If autocommit is on, both are silently ignored. I don't know if there is a way to turn autocommit off through the JDBC connection string; I haven't found one.
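Since the connection string doesn't seem to expose it, you can disable autocommit on the pool instead. Here's a minimal sketch assuming HikariCP (which you're already using); the table name big_table and the payload column are placeholders for your own schema:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public class CursorFetchExample {
      public static void main(String[] args) throws SQLException {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost/dbname"
            + "?maxResultBuffer=1G&adaptiveFetch=true"
            + "&adaptiveFetchMaximum=2&defaultRowFetchSize=1");
        config.setMaximumPoolSize(1);
        // Hand out connections with autocommit already off, so the driver
        // can stream rows through a server-side cursor instead of
        // buffering the whole result set.
        config.setAutoCommit(false);

        try (HikariDataSource ds = new HikariDataSource(config);
             Connection conn = ds.getConnection();
             Statement stmt = conn.createStatement()) {
          // With autocommit off, the fetch size (and the adaptive fetch
          // settings in the URL) are actually honored.
          stmt.setFetchSize(1);
          try (ResultSet rs = stmt.executeQuery("SELECT payload FROM big_table")) {
            while (rs.next()) {
              // process one row at a time
            }
          }
          conn.commit(); // end the transaction that held the cursor
        }
      }
    }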

CodePudding user response:

What the exception is telling you is that the driver has buffered more result data than the maxResultBuffer you configured: despite the fetch settings, it is still reading more rows at once than you've allowed.

One potential solution is to increase maxResultBuffer to a higher value. However, this may not be sustainable if you are dealing with very large tables.

Another solution is to query the data differently, such as with pagination or an explicit cursor, so that you retrieve the data in smaller chunks rather than all at once; see the sketch below.
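As an illustration, here is keyset pagination in plain JDBC. It assumes the table has an indexed unique id column to page on; the table and column names (big_table, id, payload) are made up for the example:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class KeysetPaginationExample {
      public static void main(String[] args) throws SQLException {
        String url = "jdbc:postgresql://localhost/dbname";
        // Fetch at most 10 rows per round trip, resuming after the last id seen.
        String sql = "SELECT id, payload FROM big_table WHERE id > ? ORDER BY id LIMIT 10";
        long lastId = 0;
        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement ps = conn.prepareStatement(sql)) {
          while (true) {
            ps.setLong(1, lastId);
            boolean sawRow = false;
            try (ResultSet rs = ps.executeQuery()) {
              while (rs.next()) {
                sawRow = true;
                lastId = rs.getLong("id");
                // process rs.getBytes("payload") here
              }
            }
            if (!sawRow) {
              break; // no more rows
            }
          }
        }
      }
    }

Each query returns a bounded number of rows, so the driver never has to hold more than ten of your ~50MB rows in memory at once (use LIMIT 1 to cap it at a single row).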

Additionally, you can try to optimize the query itself by using appropriate indexes and by selecting only the columns you need. This reduces the amount of data that the driver has to buffer.

You can also try to use a different JDBC driver that may have better memory management capabilities.

It's not possible to give a definitive solution without more details about your use case, but the options above should let you reduce how much data is fetched at once and manage the memory usage of your application.
