How do I tell Postgres to load small tables into memory?

Time:11-24

Let's say I have a small Postgres database (< 500 MB), and I have an app that is very read-intensive: 99% of requests are reads. Is there a way to tell Postgres to load all tables into RAM so it can do selects faster? I think Oracle and SQL Server have that kind of functionality.

I have done some tests on my local machine with a table of 500 records: 100,000 lookups from a Java HashMap took 2 ms, while the same number of SQL selects took 12,000 ms. Obviously the Java HashMap is faster because it runs in the same process, but is there a way to speed up SQL queries against small tables in Postgres? Thanks

 for (int i = 0; i < 100_000; i++) {
   // 1) select * from someTable where id = 10
   // 2) get from Java HashMap by key
 }
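For reference, a self-contained sketch of the HashMap side of this benchmark (class and method names here are hypothetical, not from the original post). Note that each SQL select additionally pays for parsing, planning, and a client-server round trip per statement, which is what dominates the 12,000 ms figure:

```java
import java.util.HashMap;
import java.util.Map;

public class LookupBench {
    // Build a map of n records keyed by id (a stand-in for the small table)
    static Map<Integer, String> buildMap(int n) {
        Map<Integer, String> m = new HashMap<>();
        for (int i = 0; i < n; i++) {
            m.put(i, "row-" + i);
        }
        return m;
    }

    // Perform `iterations` lookups of the same key and count the hits
    static int lookupAll(Map<Integer, String> m, int iterations) {
        int hits = 0;
        for (int i = 0; i < iterations; i++) {
            if (m.get(10) != null) {
                hits++;
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        Map<Integer, String> m = buildMap(500);
        long start = System.nanoTime();
        int hits = lookupAll(m, 100_000);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(hits + " lookups in ~" + elapsedMs + " ms");
    }
}
```

A fairer database-side comparison would reuse one connection and one prepared statement across all iterations, rather than issuing each select cold.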

CodePudding user response:

PostgreSQL caches data in RAM automatically. All you have to do is set shared_buffers to be at least as big as your database; then all data has to be read from disk only once and will stay cached in RAM.
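For a database under 500 MB, that could look like the following in postgresql.conf (the 1GB figure is an illustrative value with headroom, not a tuned recommendation; changing shared_buffers requires a server restart):

```
# postgresql.conf -- illustrative value, larger than the ~500 MB database
shared_buffers = 1GB
```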

If you want to reduce the initial slowness caused by reading the data from disk, set shared_preload_libraries = 'pg_prewarm,pg_stat_statements'; with pg_prewarm preloaded, its autoprewarm worker records the contents of the buffer cache and reloads them automatically after a restart.
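You can also warm a specific table into the buffer cache on demand with the pg_prewarm extension (a sketch; someTable is the table name from the question):

```sql
-- one-time setup, then load the table's blocks into shared buffers
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('someTable');  -- returns the number of blocks prewarmed
```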

I suspect that the excessive slowness you report is caused not by the database itself, but by network latency or inefficient queries; pg_stat_statements will help you profile your database workload.
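For example, once pg_stat_statements is loaded, a query like this lists the most expensive statements (mean_exec_time and total_exec_time are the column names in PostgreSQL 13 and later; older versions call them mean_time and total_time):

```sql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

SELECT query, calls, mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```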
