I want to upload a CSV file to a given S3 bucket. The CSV file is written out from a DataFrame using df.csv(path). For now I have saved the file locally; is there a way to upload that file to an S3 bucket, given the bucket name?
CodePudding user response:
If you are working with a Spark DataFrame, something like this should work (note that Spark writes a directory of part files at the given path, not a single CSV file with that exact name):

    dataframe
      .write
      .option("header", "true")
      .csv("s3a://bucket/path/file.csv")
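For the s3a:// path to be writable, the Hadoop S3A connector also needs credentials. A minimal sketch, assuming PySpark with the hadoop-aws package on the classpath and an existing SparkSession named `spark`; the key values are placeholders (when running on EMR or with an IAM instance role, this step is usually unnecessary):

```python
# Assumption: `spark` is an existing SparkSession; key values are placeholders.
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
hadoop_conf.set("fs.s3a.access.key", "YOUR_ACCESS_KEY")
hadoop_conf.set("fs.s3a.secret.key", "YOUR_SECRET_KEY")
```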
More examples at https://sparkbyexamples.com/spark/write-read-csv-file-from-s3-into-dataframe/
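If the DataFrame is a pandas DataFrame rather than a Spark one, you can skip the local file entirely and upload the CSV with boto3. A hedged sketch, assuming pandas and boto3 are installed and AWS credentials are already configured (environment variables, ~/.aws/credentials, or an IAM role); the bucket name and object key below are placeholders:

```python
import pandas as pd


def df_to_csv_bytes(df: pd.DataFrame) -> bytes:
    """Serialize a DataFrame to CSV entirely in memory (no temp file)."""
    return df.to_csv(index=False).encode("utf-8")


def upload_df_as_csv(df: pd.DataFrame, bucket: str, key: str) -> None:
    """Upload df as a single CSV object to s3://bucket/key."""
    import boto3  # imported lazily so the serialization helper works without it

    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=df_to_csv_bytes(df))


# Usage (placeholder bucket/key):
# upload_df_as_csv(df, "my-bucket", "path/file.csv")
```

Unlike the Spark route, this produces exactly one object at the key you name, which matches the "upload one CSV file" phrasing in the question.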