Is it possible to increase the S3 limits for read and write?


Is it possible to increase the S3 per-second limits for read and write requests? I only found the current values in the documentation, but no indication that these limits can be increased. Does anyone know?

CodePudding user response:

A good way to improve those limits is to leverage partitions. According to the documentation, the limits are applied per prefix inside your bucket, so the way you store your objects affects the maximum performance you can reach. Here is an example: suppose you use the bucket to store log files. One way to store them is to put everything in the root path.

Example:

2022_02_11_log_a.txt
2022_02_11_log_b.txt
2022_02_11_log_c.txt
2022_02_11_log_d.txt
2022_02_12_log_a.txt
2022_02_12_log_b.txt
2022_02_12_log_c.txt
2022_02_12_log_d.txt

To S3, those objects live inside the same partition, so together they share the maximum throughput defined in the documentation. To improve on that, you could change the paths to the following:

2022_02_11/log_a.txt
2022_02_11/log_b.txt
2022_02_11/log_c.txt
2022_02_11/log_d.txt
2022_02_12/log_a.txt
2022_02_12/log_b.txt
2022_02_12/log_c.txt
2022_02_12/log_d.txt

Now you have two partitions, 2022_02_11 and 2022_02_12, each one with its own throughput limits.
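
As a rough sketch, this is how the date-based layout could be produced at upload time with boto3. The bucket name, file names, and the upload_log helper are placeholders for illustration, not anything from the question.

import datetime

import boto3  # assumes boto3 is installed and AWS credentials are configured

s3 = boto3.client("s3")
BUCKET = "my-log-bucket"  # placeholder bucket name

def upload_log(local_path, log_name):
    # Build a key such as 2022_02_11/log_a.txt so each day gets its own prefix
    prefix = datetime.date.today().strftime("%Y_%m_%d")
    key = f"{prefix}/{log_name}"
    with open(local_path, "rb") as fh:
        s3.put_object(Bucket=BUCKET, Key=key, Body=fh)
    return key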

You should check the access pattern of your files and define partitions that take advantage of it. If your access pattern is random, you could use a hash as part of your objects' paths, as sketched below.
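
For a random access pattern, one common approach is to prepend a short hash of the object name so keys spread evenly across many prefixes. A minimal sketch, where the two-character prefix length and the hashed_key helper are just illustrative choices:

import hashlib

def hashed_key(name, prefix_len=2):
    # Prefix the key with the first characters of an MD5 digest,
    # e.g. "ab/2022_02_11_log_a.txt", so writes fan out across prefixes
    digest = hashlib.md5(name.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_len]}/{name}"

The trade-off is that listing objects by date becomes harder, so this only makes sense when the access pattern really is random.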

I will also point you to the official documentation about object key naming.
