I've been assigned a task: a batch process that runs every half hour, calls an API for data, and stores that data in an S3 bucket, each run in a different file.
I already implemented this by copying the data into a BufferedOutputStream and then uploading it to S3 with uploadPart from the AWS SDK for Java (v2), in a class extending FlatFileItemWriter&lt;T&gt;.
But I wonder: is there another way to write to an already existing file in a bucket with a chunked process in Spring Batch? At the very least I'd like to reduce the amount of processing per call, since the data I get from the API is growing quickly — the original API response is getting large (1000+ items per request).
This will be streamed in the future, but since we are mid-migration I need a temporary solution until the API I call gets migrated.
CodePudding user response:
According to https://forums.aws.amazon.com/message.jspa?messageID=540395, there is no way to append to an existing S3 object — objects in S3 are immutable.
Based on that, you would need to generate a new file with the combined content and upload it again, replacing the previous one.
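That said, since you are already using uploadPart, you can avoid rewriting the whole object by keeping a single multipart upload open for the duration of the job and uploading each chunk as a separate part, completing the upload when the step finishes. Below is a rough sketch of that idea as a Spring Batch `ItemStreamWriter` using the AWS SDK for Java v2. The bucket name, key, and the Spring Batch 4 `write(List)` signature are assumptions — adapt them to your setup, and note that S3 requires every part except the last to be at least 5 MB, so small chunks must be buffered until they reach that threshold.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.ItemStreamWriter;

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.*;

/**
 * Sketch: writes each Spring Batch chunk into an ongoing S3 multipart
 * upload, so the whole payload never has to sit in memory at once.
 * Assumes Spring Batch 4 (List-based write) and hypothetical
 * bucket/key names.
 */
public class S3MultipartItemWriter implements ItemStreamWriter<String> {

    private static final int MIN_PART_SIZE = 5 * 1024 * 1024; // S3 minimum for non-final parts

    private final S3Client s3 = S3Client.create();
    private final String bucket = "my-bucket";          // assumption
    private final String key = "exports/data-0001.csv"; // assumption

    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private final List<CompletedPart> completedParts = new ArrayList<>();
    private String uploadId;
    private int partNumber = 1;

    @Override
    public void open(ExecutionContext ctx) {
        // Start the multipart upload once, at step start.
        uploadId = s3.createMultipartUpload(
                CreateMultipartUploadRequest.builder()
                        .bucket(bucket).key(key).build())
                .uploadId();
    }

    @Override
    public void write(List<? extends String> items) {
        for (String item : items) {
            buffer.writeBytes((item + "\n").getBytes(StandardCharsets.UTF_8));
        }
        // Only flush a part once we have enough bytes to satisfy S3's minimum.
        if (buffer.size() >= MIN_PART_SIZE) {
            uploadBufferedPart();
        }
    }

    private void uploadBufferedPart() {
        UploadPartResponse resp = s3.uploadPart(
                UploadPartRequest.builder()
                        .bucket(bucket).key(key)
                        .uploadId(uploadId)
                        .partNumber(partNumber)
                        .build(),
                RequestBody.fromBytes(buffer.toByteArray()));
        completedParts.add(CompletedPart.builder()
                .partNumber(partNumber).eTag(resp.eTag()).build());
        partNumber++;
        buffer.reset();
    }

    @Override
    public void update(ExecutionContext ctx) { /* no restart state kept in this sketch */ }

    @Override
    public void close() {
        // Final part may be under 5 MB; that is allowed for the last part.
        if (buffer.size() > 0) {
            uploadBufferedPart();
        }
        s3.completeMultipartUpload(CompleteMultipartUploadRequest.builder()
                .bucket(bucket).key(key)
                .uploadId(uploadId)
                .multipartUpload(CompletedMultipartUpload.builder()
                        .parts(completedParts).build())
                .build());
    }
}
```

This still produces a new object per job run (it does not append to a previous run's file), but within one run it keeps memory usage bounded to roughly one part, which addresses the "reduce the size of processing per call" concern. On failure you should abort the upload with `abortMultipartUpload` so incomplete parts do not accrue storage costs.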