I have an S3 bucket with over 20 million objects (2.3TB).
Each object needs its Content-Disposition metadata set to a user-defined file name while preserving its existing Content-Type metadata.
The file names are stored in a separate RDS database.
It looks like I could use the copy command for a small number of files, but with a bucket this big that doesn't sound like a sane option.
Any help would be greatly appreciated!
CodePudding user response:
This seems like a perfect use case for S3 Batch Operations. You could create a Lambda function that applies your changes to a single object, then run it concurrently across the whole bucket through an S3 Batch Operations job.
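For reference, here is a minimal sketch of such a Lambda handler in Python with boto3, following the S3 Batch Operations Lambda invocation schema. It does an in-place copy with `MetadataDirective='REPLACE'`, which is how S3 object metadata is changed; `lookup_filename` is a hypothetical placeholder for your RDS lookup:

```python
import urllib.parse

import boto3

s3 = boto3.client("s3")


def lookup_filename(key):
    """Hypothetical helper: fetch the user-defined file name for this
    object key from your RDS database (e.g. via pymysql or psycopg2,
    ideally through an RDS Proxy). Implementation depends on your schema."""
    raise NotImplementedError


def lambda_handler(event, context):
    # S3 Batch Operations invokes the function with one task per event.
    task = event["tasks"][0]
    bucket = task["s3BucketArn"].split(":::")[-1]
    # Keys in the batch event are URL-encoded.
    key = urllib.parse.unquote_plus(task["s3Key"])

    try:
        # Read the current metadata so Content-Type and any user metadata
        # survive the REPLACE copy below.
        head = s3.head_object(Bucket=bucket, Key=key)
        filename = lookup_filename(key)

        # An in-place copy is the only way to change object metadata;
        # MetadataDirective='REPLACE' swaps in the new header set.
        # Note: copy_object only works for objects up to 5 GB.
        s3.copy_object(
            Bucket=bucket,
            Key=key,
            CopySource={"Bucket": bucket, "Key": key},
            MetadataDirective="REPLACE",
            ContentType=head["ContentType"],
            ContentDisposition=f'attachment; filename="{filename}"',
            Metadata=head.get("Metadata", {}),
        )
        result_code, result_string = "Succeeded", "OK"
    except Exception as exc:
        # "TemporaryFailure" would ask the job to retry the task instead.
        result_code, result_string = "PermanentFailure", str(exc)

    # Response shape required by S3 Batch Operations.
    return {
        "invocationSchemaVersion": event["invocationSchemaVersion"],
        "treatMissingKeysAs": "PermanentFailure",
        "invocationId": event["invocationId"],
        "results": [
            {
                "taskId": task["taskId"],
                "resultCode": result_code,
                "resultString": result_string,
            }
        ],
    }
```

You would then create the Batch Operations job with a manifest listing the 20 million keys (an S3 Inventory report works well as the manifest) and point it at this function; the job handles the fan-out, retries, and a completion report for you.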