OK, first things first: I know S3 doesn't have "directories"; everything is an object, and "directories" are just keys that share a prefix containing slashes. So there's no need to comment or answer that S3 doesn't have directories!
That said, a lot of software "detects" keys with slashes in them as directories, and one such piece of software is the SFTP server we use. We host an SFTP server that uses AWS S3 as its "file system": there is a bucket with "directories", and each directory in the root of the bucket is an SFTP user's home directory.
The problem is that AWS removes empty "directories" (zero-byte objects whose keys end with a slash). So if an SFTP user deletes all their files (or has not yet received anything), AWS removes the "directory", and the user then gets an error when connecting to our SFTP server because their home directory is gone.
One obvious solution is to add a "blocker" file, e.g. a readme.txt or similar, to the "directory" so there is always something in it, but most clients seem to remove these as well... There are other ways to "lock" the file against deletion (Object Lock, for example), but that in turn causes errors for the SFTP client when they try to delete all files, and we'd like to avoid throwing errors (and showing unnecessary files)...
So, is there any way to prevent AWS S3 from removing the empty "directory"?
CodePudding user response:
If you create the directory through s3api (rather than s3), it won't get deleted.
For example:
aws s3api put-object --bucket mybucket --key home/user1/
Then you can copy files into, and delete files from, this directory.
I don't know how your SFTP server works internally, but this is what happens when you click the "Create Folder" button in the AWS console or in WinSCP. I would think the SFTP software does something similar.
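For completeness, here is a minimal sketch of the same idea using boto3; the bucket name "mybucket", the "home/<user>/" layout and the ensure_home_directory helper are just assumptions carried over from the example above. It recreates the zero-byte marker object for a user's home directory if it has gone missing (you could run something like this on a schedule or when provisioning users):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def ensure_home_directory(bucket: str, user: str) -> None:
    # The "directory" is just a zero-byte object whose key ends with "/".
    key = f"home/{user}/"
    try:
        # Check whether the marker object still exists.
        s3.head_object(Bucket=bucket, Key=key)
    except ClientError as err:
        if err.response["Error"]["Code"] == "404":
            # Marker is gone; recreate it as an empty object, same as
            # "aws s3api put-object --bucket mybucket --key home/user1/".
            s3.put_object(Bucket=bucket, Key=key)
        else:
            raise

ensure_home_directory("mybucket", "user1")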