I have a write-heavy table in my PostgreSQL database. For every read, there are about 100k writes. The table has multiple (5-6) indices, which is killing performance because every index has to be updated on each write.
Is there any way to optimize the performance of a write-heavy table with indices? I've come up with a few options:
- Somehow disable index updates on every write and only run them on a timed interval (once every few hours would be fine for my use case).
- Delete the indices. Create a materialized view with the indices and refresh this MV every few hours. All reads will target the MV, not the primary table. (A sketch of this option follows the list.)
- Delete the indices. Take the performance hit on the reads.
- Anything else I'm not thinking of?
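For context, here is a minimal sketch of the materialized-view option; the `events` table, its columns, and the read pattern are all made up for illustration:

```sql
-- Hypothetical write-heavy table: no indices beyond the primary key,
-- so each INSERT maintains only the heap and a single index.
CREATE TABLE events (
    id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    device_id int         NOT NULL,
    recorded  timestamptz NOT NULL DEFAULT now(),
    payload   jsonb
);

-- Reads target a materialized view instead; the indices live here,
-- so they are maintained only at refresh time, not on every write.
CREATE MATERIALIZED VIEW events_read AS
SELECT * FROM events;

CREATE UNIQUE INDEX ON events_read (id);  -- required for CONCURRENTLY
CREATE INDEX ON events_read (device_id, recorded);

-- Run every few hours (cron, pg_cron, ...). CONCURRENTLY keeps the
-- view readable during the refresh, at the cost of a slower refresh.
REFRESH MATERIALIZED VIEW CONCURRENTLY events_read;
```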
CodePudding user response:
Your second and third points sound like good options. If you have few reads, you shouldn't need that many indexes. Consider partitioning: it can eliminate the need for one index, and it makes getting rid of old data easy.
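If you go the partitioning route, a minimal sketch with hypothetical table and column names (declarative range partitioning by a timestamp column) could look like:

```sql
-- Partitioned by time: queries on recent data scan only the recent
-- partitions, which can stand in for an index on the time column.
CREATE TABLE measurements (
    device_id int         NOT NULL,
    recorded  timestamptz NOT NULL,
    value     double precision
) PARTITION BY RANGE (recorded);

CREATE TABLE measurements_2024_06 PARTITION OF measurements
    FOR VALUES FROM ('2024-06-01') TO ('2024-07-01');

-- Getting rid of old data is a cheap metadata operation instead of
-- a DELETE that must remove every matching entry from every index.
DROP TABLE measurements_2024_06;
```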
CodePudding user response:
If your writes are all (or mostly) INSERT (not UPDATE) statements, and your data can be described as "time-series data" - i.e. data about events that happen over time - you would probably benefit from the TimescaleDB PostgreSQL extension.
Timescale uses a concept it calls "hypertables", which transparently break your data up into smaller chunks based on time (and on other columns, if you configure them). Performance can be significantly better than plain PostgreSQL, and all PostgreSQL features remain supported.
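A rough sketch of what adoption looks like, assuming the extension is installed on the server and using a made-up `metrics` table:

```sql
CREATE EXTENSION IF NOT EXISTS timescaledb;

CREATE TABLE metrics (
    time      timestamptz NOT NULL,
    device_id int         NOT NULL,
    value     double precision
);

-- Convert the plain table into a hypertable chunked by "time";
-- subsequent INSERTs and queries use the metrics table as usual.
SELECT create_hypertable('metrics', 'time');
```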
I am not associated with TimescaleDB; I'm just a satisfied user.