STDOUT and STDERR to SAME file and STDERR to ANOTHER [checklist]


I'm writing a bash script and I want to redirect stdout and stderr to a single file (output.log) and stderr to another one (error.log). The terminal should show only the echo commands I choose from the script.

I basically want to build a checklist: redirect stdout and stderr from the commands to one file, and stderr to a different file. Once you have that, the only thing left is checking whether each command succeeded, which is easy by checking the stderr file.
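
For instance, a minimal sketch of the "check the stderr file" idea for a single step (step.out and step.err are names made up for illustration):

ping -c 3 google.es > step.out 2> step.err   # capture this step's output separately
if [ -s step.err ]; then                     # -s: true if the file exists and is non-empty
  echo "Error trying to ping"
else
  echo "Ping finished successfully!"
fi

In practice the exit status (as in the if ! ping ... example below) is usually a more reliable success test than the size of the stderr file, since some commands print warnings to stderr even when they succeed.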

I did something similar to what I want but I have to execute the script like this:

{ . script.sh 2>&1 1>&3 | tee error.log; } > output.log 3>&1

But to each command whose output I want to show in the terminal, I have to append:

| tee -a /dev/tty

A minor problem with this method is that tput civis and tput cnorm, used to hide and show the cursor, do not work.

It would be nice if I could execute the script like this, but it is not required as long as tput works:

. script.sh

Here is an example with the ping command, and a generic one, to show what I want:

echo "Trying ping..." | tee -a /dev/tty # Show to terminal
if ! ping -c 3 google.es; then # Check if the ping is successful or not, redirecting all outputs to files
  echo "Error trying to ping" | tee -a /dev/tty # Show to terminal
else
  echo "Ping finished successfully!" | tee -a /dev/tty # Show to terminal
fi

echo "Trying {cmd}..." | tee -a /dev/tty # Show to terminal
if ! {cmd2}; then # Check if the {cmd2} is successful or not, redirecting all outputs to files
  echo "Error trying to {cmd2}..." | tee -a /dev/tty # Show to terminal
else
  echo "{cmd2} finished successfully!" | tee -a /dev/tty # Show to terminal
fi
...

The output I want would be:

Trying ping...
Ping finished successfully!
Trying {cmd2}...
Error trying to {cmd2}!
...

If there's another way to make that checklist I am all ears.

Thank you for your time :)

PS: I will write functions to refactor the code, don't worry about that. For example, a function to check whether the command succeeded or not.
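
For instance, a minimal sketch of such a helper, assuming the log redirections are already set up around the script (the name run_step is made up for illustration):

run_step() {
  local label=$1; shift
  echo "Trying ${label}..." | tee -a /dev/tty     # show progress on the terminal
  if ! "$@"; then                                 # run the command; its stdout/stderr go wherever they are redirected
    echo "Error trying to ${label}" | tee -a /dev/tty
    return 1
  fi
  echo "${label} finished successfully!" | tee -a /dev/tty
}

run_step "ping" ping -c 3 google.es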

CodePudding user response:

A standard way would be:

 { . script.sh 2>&1 1>&3 | tee error.log; } > output.log 3>&1

To understand it, read the "redirections" of a command as moving content from right to left. (Bash actually processes redirections left to right, duplicating file descriptors, but this mental model leads to the same result here.)

Let's consider what happens with cmd 2>&1 1>&3.

  1. The content of 1 is "moved" to 3.

  2. The content of 2 is "moved" to 1.

  3. As there are no more redirections for cmd, the contents of 1, 2 (empty) and 3 are then "consumed" by the rest of the script.

Now, what would happen if we change the order of the redirections with cmd 1>&3 2>&1?

  1. The content of 2 is "moved" to 1.

  2. The content of 1 (which also contains a copy of 2) is then "moved" to 3.

  3. As there are no more redirections for cmd, the contents of the file descriptors 1 (empty), 2 (empty) and 3 are then "consumed" by the rest of the script.
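
A quick way to see the difference in a terminal, as a sketch (the fd3.txt scratch file and the tiny command group are only for illustration):

exec 3> fd3.txt                          # open fd 3 on a scratch file

{ echo out; echo err >&2; } 2>&1 1>&3    # "out" ends up in fd3.txt, "err" on the terminal
{ echo out; echo err >&2; } 1>&3 2>&1    # both "out" and "err" end up in fd3.txt

exec 3>&-                                # close fd 3 again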

CodePudding user response:

This should do what you want:

. script.sh > >( tee output.log ) 2> >( tee error.log )

This uses process substitutions to write the output of the command to two instances of tee.

The first substitution writes stdout to output.log and copies it to the existing stdout (likely your terminal). The second substitution copies stderr to error.log and to its stdout, which by then points at the first tee, so stderr also ends up in output.log and on the terminal.
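
A quick way to try the pattern with a throwaway command instead of the real script (just a sketch):

{ echo "to stdout"; echo "to stderr" >&2; } > >( tee output.log ) 2> >( tee error.log )

Afterwards output.log should contain both lines and error.log only the stderr line; note that the tee processes may finish slightly after the prompt returns, so the files can lag by a moment.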

CodePudding user response:

I think this will do what you want.

$: { ls -ld . bogus 2> >( tee err.log ) ;} >all.log
$: grep . ???.log
all.log:drwxr-xr-x 1 paul 1049089 0 Jan  9 11:45 .
all.log:ls: cannot access 'bogus': No such file or directory
err.log:ls: cannot access 'bogus': No such file or directory

or to show the results as you go,

$: { ls -ld . bogus 2> >( tee err.log ) ;} | tee all.log
drwxr-xr-x 1 paul 1049089 0 Jan  9 12:01 .
ls: cannot access 'bogus': No such file or directory
$: grep . ???.log
all.log:drwxr-xr-x 1 paul 1049089 0 Jan  9 12:01 .
all.log:ls: cannot access 'bogus': No such file or directory
err.log:ls: cannot access 'bogus': No such file or directory

It's also possible to do this inside your script, though it starts getting messy...

$: cat tst
#!/bin/bash
{ { # double-group all I/O
    ls -ld . bogus
} 2> >( tee err.log ) ;} | tee all.log # split by grouping

$: ./tst
ls: cannot access 'bogus': No such file or directory
drwxr-xr-x 1 paul 1049089 0 Jan  9 12:09 .

$: grep . ???.log
all.log:ls: cannot access 'bogus': No such file or directory
all.log:drwxr-xr-x 1 paul 1049089 0 Jan  9 12:09 .
err.log:ls: cannot access 'bogus': No such file or directory

You could also do something like -

#!/bin/bash
exec > >(tee all.log)
{ # group all I/O
  ls -ld . bogus
} 2> >( tee err.log )

though at that point I think the collation of stdout (which is buffered) and stderr (which generally is not) will decouple. Logs with errors out of sync can be nightmarish to debug...
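
If the interleaving matters, one mitigation to try (a sketch, assuming GNU coreutils' stdbuf is available; it only helps programs that use stdio buffering) is to force line-buffered stdout on the commands:

#!/bin/bash
exec > >(tee all.log)
{ # group all I/O
  stdbuf -oL ls -ld . bogus   # line-buffer stdout so it interleaves closer to stderr
} 2> >( tee err.log )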
