I wrote a bash script to export logs from my Linux machine to S3. What I want is to add the IP address of the instance to the name of the .gz file I am moving, and I don't know how to do it. Here is my script, which didn't work:
#!/bin/bash
# get the ip address of the instance
ip4=$(/sbin/ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1)
find /home/user/tst/logs -name '*.gz' -exec bash -c '
  for item do
    aws s3 cp "${item/ip4}" s3://s3backups/tst/
    if [[ $? -eq 0 ]]; then
      sudo rm "$item"
    fi
  done
' bash {}
If I put "$item" instead of "${item/ip4}", it moves the file, but I need the name changed so that the IP address is added to it. Any ideas?
CodePudding user response:
You can first upload the files to the S3 bucket with aws s3 cp, and then iterate over the list of uploaded files and use the aws s3 mv command to rename them.
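A minimal sketch of that two-step approach, using the bucket and log path from the question. The IP is hard-coded here for illustration (in the real script, use the `ip4=$(...)` line from the question), and the `echo`s make it a dry run; drop them to actually execute the aws commands:

```shell
#!/bin/bash
# Dry-run sketch: upload first, then rename on S3 by prefixing the IP.
# Hard-coded IP for illustration; in the real script use:
#   ip4=$(/sbin/ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1)
ip4="203.0.113.7"

for filepath in /home/user/tst/logs/*.gz; do
  filename="${filepath##*/}"
  # 1) upload under the original name
  echo aws s3 cp "$filepath" "s3://s3backups/tst/${filename}"
  # 2) rename in place on S3, adding the instance IP as a prefix
  echo aws s3 mv "s3://s3backups/tst/${filename}" \
    "s3://s3backups/tst/${ip4}_${filename}"
done
```

Note that `aws s3 mv` on two S3 URIs is a server-side copy plus delete, so this renames without re-uploading; a single `aws s3 cp` straight to the final name (see the other answer) avoids the second step entirely.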
CodePudding user response:
I only use -exec in find for simple commands. Why not use a "normal" loop? When the folders are already created in S3, you can use:
#!/bin/bash
# get the ip address of the instance
ip4=$(/sbin/ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1)
while IFS= read -r filepath; do
  filename="${filepath##*/}"
  echo "Processing ${filename}"
  aws s3 cp "${filepath}" "s3://s3backups/tst/${ip4}/${filename}"
  if [[ $? -eq 0 ]]; then
    sudo rm "${filepath}"
  fi
done < <(find /home/user/tst/logs -maxdepth 1 -type f)
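If you want the IP in the file name itself rather than in a per-IP folder (which is what the question asks for), build the destination key by prefixing `${ip4}_` to the file name. A sketch under the same assumptions (bucket and paths from the question; IP hard-coded here for illustration, replace it with the `ip4=$(...)` line above):

```shell
#!/bin/bash
# Variant: prefix the IP to the object name instead of using a folder.
# Hard-coded IP for illustration; in the real script use:
#   ip4=$(/sbin/ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1)
ip4="203.0.113.7"

while IFS= read -r filepath; do
  filename="${filepath##*/}"
  target="s3://s3backups/tst/${ip4}_${filename}"
  echo "Uploading ${filepath} as ${target}"
  # remove the local file only if the upload succeeded
  aws s3 cp "${filepath}" "${target}" && sudo rm "${filepath}"
done < <(find /home/user/tst/logs -maxdepth 1 -type f -name '*.gz')
```

The `&&` replaces the `if [[ $? -eq 0 ]]` check: `rm` only runs when `aws s3 cp` exits successfully, so a failed upload never deletes the local copy.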