Failure not capturing in Unix command when redirecting the java/Spark logs


I am trying to pipe the Java/Spark processing logs into the log_msg function, and on the next line I use result=$?, but the failure of the Java/Spark processing is not being captured in Unix.

    log_msg () {
      if [ -n "${1}" ]; then
        echo "${1}"
      else
        while read -r IN
        do
          echo "$IN"
        done
      fi
    }

    failure () {
      echo "${1}"
    }

    log_msg "Program Begins"
    sh deltaspark.sh |& log_msg
    result=$?
    if [ "$result" -ne 0 ]; then
      failure "There was an error"
    fi

But when I write the Spark/Java processing log with the redirection >> spark.log 2>&1, result=$? captures 0 for success and 1 for failure. With the logic above, after piping the processing logs through the function log_msg, result=$? does not give 1 in the failure case.

How do we need to handle this case in Unix?

CodePudding user response:

The exit status of a pipeline is the exit status of its last command, which here is the function log_msg.
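If you want to keep the pipe, bash's pipefail option or the PIPESTATUS array can expose the status of deltaspark.sh. A minimal sketch, assuming bash is the interpreter and that the log_msg function from the question is already defined:

    #!/bin/bash
    # pipefail: the pipeline's status becomes non-zero if any command in it fails.
    set -o pipefail
    sh deltaspark.sh |& log_msg
    result=$?                        # non-zero if deltaspark.sh (or log_msg) failed

    # Without pipefail, PIPESTATUS holds the status of each pipeline member;
    # it must be read immediately after the pipeline finishes.
    sh deltaspark.sh |& log_msg
    result=${PIPESTATUS[0]}          # exit status of deltaspark.sh alone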

As an alternative to using bash's pipefail option and PIPESTATUS array as proposed in a comment, you could replace the pipe with a redirection into a process substitution:

sh deltaspark.sh > >( log_msg ) 2>&1

or

sh deltaspark.sh >& >( log_msg )
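
Putting this together with the script from the question, a minimal sketch of how the check could look (assuming bash, since process substitution is a bash feature, and with log_msg and failure defined as in the question):

    #!/bin/bash
    # stdout and stderr of deltaspark.sh are fed into the log_msg function,
    # so $? still reflects deltaspark.sh itself rather than log_msg.
    log_msg "Program Begins"
    sh deltaspark.sh > >( log_msg ) 2>&1
    result=$?
    if [ "$result" -ne 0 ]; then
      failure "There was an error"
    fi

One design note: the process substitution runs asynchronously, so the last log lines may still be printing after result has already been checked; the exit status itself is unaffected by this.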