Why Won't Heroku Postgres Cache Hit Rate Go Up?


I am migrating a database to Heroku. I am using pg:diagnose to try to ensure that the database will run smoothly.

Yesterday I noted that my "overall cache hit rate" was around 94%, which is lower than the recommended 99%. My database was running on the "Premium 3" tier, which has 15 GB of RAM, so I decided to upgrade to a plan with more RAM, hoping that would lead to a higher cache hit rate. I switched to "Standard 4", which has more than double the RAM. The cache hit rate was low at first, but that was just because the cache was cold.

But now it's the next day, the cache is warm, and my "overall cache hit rate" is back at 94%, right where it started. I must be missing something: I doubled the RAM, so why am I not getting any more cache hits?

I would consider upgrading to an even higher plan, but upgrading doesn't seem to help. My data size is 38.9 GB, and my current plan has 30.5 GB of RAM.

Thanks in advance to anyone who can help me understand what's going on here!

CodePudding user response:

The cache hit rate you are looking at from pg:diagnose seems to be measured roughly the same way PostgreSQL itself would derive it: it considers every block found in shared_buffers a hit, and everything else a miss. But many of those misses could also have been served from memory, just from the kernel's filecache rather than from PostgreSQL's shared_buffers. From a performance perspective those should count as hits too, but there is no mechanism to count them as such.
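pg:diagnose doesn't document its exact query, but a hit rate in this spirit can be computed straight from PostgreSQL's own statistics views. A minimal sketch using the standard pg_stat_database view (the rounding is purely cosmetic):

```sql
-- Buffer cache hit rate as PostgreSQL itself sees it: a "hit" is a
-- block found in shared_buffers, a "read" is a block requested from
-- the kernel (which may still be served from the OS filecache).
SELECT round(
         sum(blks_hit)::numeric
         / nullif(sum(blks_hit) + sum(blks_read), 0),
         4) AS cache_hit_rate
FROM pg_stat_database;
```

A read satisfied by the kernel's filecache is counted in blks_read exactly like a real disk read, which is why this number cannot distinguish the two.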

I don't know how Heroku manages shared_buffers. If shared_buffers stayed the same when you increased the instance size, then you would expect the reported hit rate to stay the same as well, even if the true hit rate improved (i.e. more of the buffer misses are now being served out of the filecache rather than truly being read from disk).
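You can check this yourself by comparing the setting on the old and new plan; shared_buffers is an ordinary PostgreSQL parameter and can be inspected from any session (a sketch using standard commands, nothing Heroku-specific):

```sql
-- The configured size of PostgreSQL's own buffer cache, e.g. '4GB'.
SHOW shared_buffers;

-- The same value with more context; "source" shows where it was set.
SELECT name, setting, unit, source
FROM pg_settings
WHERE name = 'shared_buffers';
```

If shared_buffers turns out to be the same size on both plans, a reported hit rate that only measures shared_buffers would explain exactly what you're seeing.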
