I have a wrapper script for a CI pipeline which works great, but it always returns 0 even when subcommands in a for loop fail. Here is an example:
#!/bin/bash
file_list=("file1 file2 file_nonexistant file3")
for file in $file_list
do
    cat $file
done
>./listfiles.sh
file1 contents
file2 contents
cat: file_nonexistant: No such file or directory
file3 contents
>echo $?
0
Since the last iteration of the loop is successful, the entire script exits with 0. What I want is for the loop to continue on failure and for the script to exit 1 if any of the loop iterations returned an error.
What I have tried so far:
- set -e, but it halts the loop and exits as soon as an iteration fails
- replaced done with done || exit 1, no effect
- replaced cat $file with cat $file || continue, no effect
CodePudding user response:
Alternative 1
#!/bin/bash
# Set a flag inside the loop instead of exiting immediately
for i in $(seq 1 6); do
    if test $i == 4; then
        z=1
    fi
done
# After the loop, exit non-zero if the flag was set
if [[ $z == 1 ]]; then
    exit 1
fi
With files
#!/bin/bash
# Creates files "ab", "c", "d" and "e"; "a" and "b" deliberately do not exist
touch ab c d e
for i in a b c d e; do
    cat $i
    # $? holds the exit status of the cat above; remember any failure
    if [[ $? -ne 0 ]]; then
        fail=1
    fi
done
# Exit non-zero if any iteration failed
if [[ $fail == 1 ]]; then
    exit 1
fi
The special parameter $? holds the exit status of the last command. A value above 0 represents a failure, so just set a flag in the loop whenever that happens and check the flag after the loop.
The $? parameter actually holds the exit status of the previous pipeline, if present. If the command is killed by a signal, the value of $? will be 128 + the signal number, for example 128 + 2 = 130 in the case of SIGINT (Ctrl-C).
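You can see that value directly in an interactive shell, for example by interrupting a command with Ctrl-C:
>sleep 30
^C
>echo $?
130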
Overkill solution with trap
#!/bin/bash
# On exit, report the flag and turn any recorded failure into exit status 22
trap ' echo X $FAIL; [[ $FAIL -eq 1 ]] && exit 22 ' EXIT
# Creates "ab", "c", "d" and "e"; "a" and "b" deliberately do not exist
touch ab c d e
for i in c d e a b; do
    cat $i || export FAIL=1   # record the failure, but keep looping
    echo F $FAIL
done
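Applied to the listfiles.sh script from the question, the flag approach would look something like this sketch (the array is written as four separate elements and expanded with "${file_list[@]}" so each file is handled on its own):
#!/bin/bash
file_list=("file1" "file2" "file_nonexistant" "file3")
fail=0
for file in "${file_list[@]}"
do
    cat "$file" || fail=1   # keep looping, but remember that something failed
done
# 0 if every cat succeeded, 1 otherwise
exit "$fail"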