I understand that the hot-warm(-cold-frozen-delete) lifecycle is a great tool, but I haven't found much numerical documentation: one of the few documents that gives examples with actual numbers (rather than just feature descriptions) is this blogpost. In its hot-warm example without rollup, it seems to me that the main storage saving comes from reducing the number of shard copies:
- one day of data = 86.4 GB
- 7 hot days = one day of data * 7 days * 2 copies (primary + 1 replica) = 1.2 TB
- 23 warm days (days 8-30) = one day of data * 23 days * 1 copy (primary only) = 1.98 TB
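For reference, a quick Python sanity check of that arithmetic; the 86.4 GB/day figure is the blog post's, and "copies" here means primary plus replica shards:

```python
# Sanity check of the blog post's hot/warm storage math.
# 86.4 GB/day comes from the post; "copies" = primary + replica shards.
day_gb = 86.4

hot_gb = day_gb * 7 * 2    # 7 hot days, 2 copies (primary + 1 replica)
warm_gb = day_gb * 23 * 1  # days 8-30 warm, 1 copy (primary only)

print(f"hot:  {hot_gb:.1f} GB")   # 1209.6 GB -> the post's ~1.2 TB
print(f"warm: {warm_gb:.1f} GB")  # 1987.2 GB -> the post's ~1.98 TB
```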
There are other resources like this webinar, but it doesn't distinguish between storage usage and RAM usage. Is there an official document (or a third-party experiment/report) that shows whether and by how much the cold/frozen/"non-searchable snapshot after deletion" phases reduce storage usage? Or is it only about lower RAM usage?
Answer:
There can't be a single "benchmark" here since ILM is just a tool that allows tuning your hardware configuration according to your data usage patterns.
For example, suppose you have heavy indexing and heavy searching across all of your data. In that case, you don't want to reduce the replica count for old data, and the gain would come primarily from slightly cheaper "warm" SSD storage. The difference would then be minimal, or none at all if the overhead of separating tiers offsets that gain.
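To make the replica-count lever concrete, here is a minimal sketch of a hot/warm ILM policy pushed over the REST API with Python's requests. The cluster URL and policy name are placeholders; in the heavy-search scenario above you would simply leave `number_of_replicas` alone:

```python
import requests

ES = "http://localhost:9200"  # placeholder; point at your own cluster

# Hypothetical hot/warm policy: roll over daily, and after 7 days move
# the index to warm nodes and drop the replica (2 copies -> 1 copy).
policy = {
    "policy": {
        "phases": {
            "hot": {
                "actions": {"rollover": {"max_age": "1d"}}
            },
            "warm": {
                "min_age": "7d",
                "actions": {
                    "allocate": {"number_of_replicas": 0}  # halves warm storage
                }
            }
        }
    }
}

resp = requests.put(f"{ES}/_ilm/policy/logs-hot-warm", json=policy)
resp.raise_for_status()
```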
An opposite example would be storing logs for compliance purposes (lots of writes but minimal reads, and those mostly against the last 24 hours). There you probably want to move everything older than a week or so into the "frozen" tier, which uses searchable snapshots in S3 buckets for storage and is very cheap. Those shards also don't count towards the cluster's shard count when it comes to heap usage and stability. In such a case, tiered storage can turn out to be orders of magnitude cheaper than a single-tier cluster.
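And a minimal sketch of that compliance-logs shape, again with placeholder names (the snapshot repository `logs-snapshots` would have to be registered against your S3 bucket beforehand). After a week the index is mounted as a searchable snapshot in the frozen phase, so the bucket becomes the primary storage:

```python
import requests

ES = "http://localhost:9200"  # placeholder; point at your own cluster

# Hypothetical compliance-logs policy: after 7 days the index is mounted
# as a searchable snapshot from an object-store repository (the frozen
# tier), and after a year it is deleted. "logs-snapshots" must already
# exist as a snapshot repository backed by your S3 bucket.
policy = {
    "policy": {
        "phases": {
            "hot": {
                "actions": {"rollover": {"max_age": "1d"}}
            },
            "frozen": {
                "min_age": "7d",
                "actions": {
                    "searchable_snapshot": {
                        "snapshot_repository": "logs-snapshots"
                    }
                }
            },
            "delete": {
                "min_age": "365d",  # e.g. one-year retention for compliance
                "actions": {"delete": {}}
            }
        }
    }
}

resp = requests.put(f"{ES}/_ilm/policy/compliance-logs", json=policy)
resp.raise_for_status()
```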