It should be straightforward, but I could not find an example online of which function or method to use: I am trying to upload a DataFrame object to an AWS S3 bucket in Julia. Do I need to save it to a file first and then upload the file? I am using the AWS and AWSS3 packages. Thanks!
CodePudding user response:
OK, I think I need to put the object into an IOBuffer first:
using AWS, AWSS3, CSV

b = IOBuffer()                                    # in-memory buffer instead of a temp file
CSV.write(b, data)                                # write the DataFrame as CSV into the buffer
s3_put(aws, s3_bucket_name, filename, take!(b))   # upload the buffered bytes to S3
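Put together, a minimal end-to-end sketch could look like the one below; the global_aws_config setup, region, and the bucket/key names are placeholders of mine, not something prescribed by the packages:

using AWS, AWSS3, CSV, DataFrames

aws = global_aws_config(; region = "us-east-1")   # credentials are picked up from the environment
df  = DataFrame(a = 1:3, b = ["x", "y", "z"])     # example table

io = IOBuffer()
CSV.write(io, df)                                 # serialize to CSV in memory
s3_put(aws, "my-bucket", "data.csv", take!(io))   # upload the bytes as an S3 object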
But it doesn't seem possible to compress it this way, i.e., to upload a .csv.gz file?
CodePudding user response:
(answer to comment)
You can compress the data in memory like this:
using CSV, CodecZlib, TranscodingStreams

buf = IOBuffer()
stream = GzipCompressorStream(buf)            # gzip-compress everything written to the stream
CSV.write(stream, table)                      # stream the table as CSV through the compressor
write(stream, TranscodingStreams.TOKEN_END)   # finalize the gzip stream
flush(stream)                                 # push any remaining compressed bytes into buf
compressed_data = take!(buf)                  # the .csv.gz bytes
close(stream)
# now put compressed_data to S3
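For that last step, the compressed bytes can go to S3 with the same s3_put call as in the first snippet; this is only a sketch that reuses the earlier placeholder names, and appending ".gz" to the key is my own choice so the object name matches a .csv.gz file:

using AWS, AWSS3

aws = global_aws_config()
s3_put(aws, s3_bucket_name, filename * ".gz", compressed_data)   # e.g. "data.csv" becomes "data.csv.gz"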