I have Elasticsearch installed on Kubernetes, and Fluentd writes log data into it every day; the average index size is about 1 GB. Yesterday I manually removed the indices older than one week (I don't have any delete policy) because storage was approaching its maximum (I have 100 GB of storage per node). However, for some reason one folder is still almost 70 GB. Do you know why?
```
[elasticsearch@elasticsearch-master-1 indices]$ du -hs * | sort -h
48K 7GFwGZHoQ4uM8KCpLc-Oxg
48K W0A74hl-Qw-2dIpx65hXFA
60K BNKGjyyZRheQOKpF4jPRmw
64K 2oGMlHa9TOC5IAtG4E8N2A
64K 5SidHjfxS-yvghDJL2y7Ug
64K NpCZAg2_R0SItwTY4n7cKA
88K leZHbZpEQW6A6xWCgVZTJQ
116K lA7py6UYQpS6kMBAcD-ShQ
216K pXqc9yQYSfCnSyi8Zf3qCA
432K J0zWpVVoQr6AUZwzMDObtA
5.3M gJ-_TDd9Q4KZAqjrSnz3RA
41M sMjw45GHSgah5a0c7PH1oA
378M nR-A9ZnxQdGhnNP0Snadhg
451M bCEnvyh4RuKHMD2H0EbS_w
476M 4EhSvr21QROhgVjY8yExIg
502M p7o2nSSjQiyFDQr4S5XECQ
504M U98Y4gbqS5mBnQeu-uXZyg
530M ciE5Uy6wQ1272mECz0VBgg
553M qydbdCS4SpuI9ISA-LRisg
67G hJZCqt9OQ8yRsRBlLCiPYg
[elasticsearch@elasticsearch-master-1 indices]$ pwd
/usr/share/elasticsearch/data/nodes/0/indices
```
CodePudding user response:
After checking the stats you posted, it seems the culprit is the fluentd-2021-11-17 index: its UUID, hJZCqt9OQ8yRsRBlLCiPYg, matches the 67G directory in your du output.
```
{
  "fluentd-2021-11-17": {
    "uuid": "hJZCqt9OQ8yRsRBlLCiPYg",
    "primaries": {
      "store": {
        "size": "66.5gb",
        "size_in_bytes": 71466871581,
        "total_data_set_size": "66.5gb",
        "total_data_set_size_in_bytes": 71466871581,
        "reserved": "0b",
        "reserved_in_bytes": 0
      }
    },
    "total": {
      "store": {
        "size": "133.2gb",
        "size_in_bytes": 143041475223,
        "total_data_set_size": "133.2gb",
        "total_data_set_size_in_bytes": 143041475223,
        "reserved": "0b",
        "reserved_in_bytes": 0
      }
    }
  }
}
```
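For reference, these numbers come from the index store stats. Something along these lines (the localhost:9200 endpoint is an assumption; run it from inside the pod or through a port-forward) lists every index with its UUID and on-disk size, and pulls the store stats for the suspect index:

```
# List indices with their UUIDs and sizes, largest first
curl -s 'http://localhost:9200/_cat/indices?v&h=index,uuid,pri.store.size,store.size&s=store.size:desc'

# Store stats for the suspect index (primaries vs. total, i.e. including replicas)
curl -s 'http://localhost:9200/fluentd-2021-11-17/_stats/store?human&pretty'
```

Note that the 133.2gb `total` figure is roughly twice the 66.5gb `primaries` figure because it includes the replica copies spread across your nodes; the ~67G directory on this node holds that node's share of the data.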
You most likely had an incident that day that generated far more logs than usual.
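If you no longer need those logs, deleting that index frees the space under its UUID directory; a minimal sketch, again assuming a local endpoint:

```
# Delete the oversized index; Elasticsearch removes its data from each node's disk
curl -X DELETE 'http://localhost:9200/fluentd-2021-11-17'
```

Longer term, since you mention having no delete policy, an ILM policy with a delete phase (or a scheduled Curator job) on the fluentd-* indices would avoid having to clean them up by hand.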