I've successfully managed to transfer a tar file over SSH on stdout from a remote system, creating a compressed file locally, by doing something like this:
read -s sudopass
ssh me@remote "echo $sudopass | sudo -S tar cf - '/dir'" 2>/dev/null | XZ_OPT='-6 -T0 -v' xz > dir.tar.xz
As expected, this gets me a dir.tar.xz locally, which is all of the remote /dir, compressed.
I've also managed to figure out how to locally compress only a subset of files, by passing a file list to tar with -T on STDIN:
find '/dir' -name '*.log' | XZ_OPT='-6 -T0 -v' tar cJvf /root/logs.txz -T -
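As an aside, if any of the matched paths could contain spaces or newlines, a null-delimited variant is safer. This is a sketch assuming GNU find and GNU tar, where --null tells the subsequent -T to expect NUL-terminated names:
find '/dir' -name '*.log' -print0 | XZ_OPT='-6 -T0 -v' tar -cJvf /root/logs.txz --null -T -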
My main question is: how would I go about doing the first thing (transfer a plain tar remotely, then compress locally) while at the same time telling tar that I only want to do it on a specific subset of files?
When I try combining the two:
ssh me@remote "echo $sudopass | sudo -S find '/dir' -name '*.log' | tar cf
-T -" | XZ_OPT='-6 -T0 -v' xz > cypress_logs.tar.xz
I get errors like:
tar: -: Cannot stat: No such file or directory
I feel like tar isn't liking the fact that I'm both passing it something on STDIN as well as expecting it to output to STDOUT. Adding another - didn't seem to help either.
Also, as a bonus question: if anyone has a better idea on how to pass $sudopass above, that would be great, since this method, while avoiding having the password in the bash history, makes the sudo password show up in the process list while it's running.
CodePudding user response:
Remember that the f option requires an argument, so when you write cf -T -, I suspect that the -T is getting consumed as the argument to f, which throws off the rest of the command line.
This works for me:
ssh me@remote "echo $password | sudo -S find /tmp/dir -name '*.log' | tar -cf- -T-"
You could also write it like this:
ssh me@remote "echo $password | sudo -S find /tmp/dir -name '*.log' | tar cf - -T-"
But I prefer to always use - for options, rather than legacy tar's weird options without any prefix.
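Plugged back into the original pipeline from the question, the fixed command would look something like this (a sketch, not tested against your exact setup):
ssh me@remote "echo $sudopass | sudo -S find '/dir' -name '*.log' | tar -cf- -T-" 2>/dev/null | XZ_OPT='-6 -T0 -v' xz > cypress_logs.tar.xz
As for the bonus question: the password leaks into the local process list because $sudopass is expanded locally inside the double quotes, so it ends up as part of ssh's argument vector. One way around that is to feed the password over ssh's standard input instead; sudo -S reads it from stdin on the remote side, and since find (not tar) sits at the head of the remote pipeline, stdin is free to carry it. A minimal sketch:
read -s sudopass
printf '%s\n' "$sudopass" | ssh me@remote "sudo -S find '/dir' -name '*.log' | tar -cf- -T-" 2>/dev/null | XZ_OPT='-6 -T0 -v' xz > cypress_logs.tar.xz
Since printf is a shell builtin in bash, the password never appears as a separate process's argument locally.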