I have a file that contains lines of the form:
( 1) 0 sec 730 usec
( 2) 0 sec 1 usec
( 3) 0 sec 1 usec
.
.
.
(998) 0 sec 1 usec
(999) 0 sec 0 usec
I would like to display only the lines that contain more than 100 usec.
I tried to use xargs but my attempts failed. I also tried to write a bash script with a while loop, storing my file's contents in an ARG variable, but I don't know how to make the while loop iterate over every line of ARG.
How can I do that both ways, please? Thanks
CodePudding user response:
To run a test on every line of a file in Bash, you can use a loop and the read command. Here's an example of how you can do this:
# Set the input file
input_file="input.txt"

# Read the file line by line
while IFS= read -r line
do
    # Run a test on the line
    if [ "$line" == "hello" ]; then
        echo "Match found"
    fi
done < "$input_file"
This will read each line of the input_file and run the test (in this case, checking if the line is equal to "hello") on each line. If the test is successful, it will print "Match found".
You can modify the test to fit your needs and use any valid Bash command or script inside the loop.
Keep in mind that this approach processes the file one line at a time in the shell, which is considerably slower than dedicated text tools. For very large files you may want to use sed or awk instead.
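Adapting the loop above to the question's format is a matter of extracting the usec value from each line. A minimal sketch, assuming the data is in a file named input.txt (the sample data below is taken from the question); parameter expansion strips the trailing " usec" and then keeps the last remaining word:

```shell
# Hypothetical sample data in the question's format (input.txt is an assumed name)
printf '%s\n' '(  1) 0 sec 730 usec' '(  2) 0 sec 1 usec' '(999) 0 sec 0 usec' > input.txt

while IFS= read -r line; do
    rest=${line% usec}      # drop the trailing " usec"
    usec=${rest##* }        # the last remaining word is the usec value
    if [ "$usec" -gt 100 ]; then
        printf '%s\n' "$line"
    fi
done < input.txt
```

This prints only the first sample line. Note that the bare "." placeholder lines shown in the question would need to be filtered out first, since they carry no usec value to compare.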
CodePudding user response:
Using awk
Match lines with 5 fields if the 4th field is greater than 100, or lines with 6 fields if the 5th field is greater than 100. The field count varies because a first field like ( 1) contains a space and splits into two fields, while (998) does not.
awk '(NF==5 && $4>100) || (NF==6 && $5>100) {print $0}' src.dat
( 1) 0 sec 730 usec
src.dat contents:
( 1) 0 sec 730 usec
( 2) 0 sec 1 usec
( 3) 0 sec 1 usec
.
.
.
(998) 0 sec 1 usec
(999) 0 sec 0 usec
CodePudding user response:
Using awk
awk '$(NF-1) > 100' file.txt
Counting backward from the last field sidesteps the extra spaces in the first field: whether ( 1) splits into one field or two, the usec value is always the second-to-last field.
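The varying field count, and why $(NF-1) still lands on the usec value, can be seen by printing NF for two sample lines from the question:

```shell
printf '%s\n' '(  1) 0 sec 730 usec' '(998) 0 sec 1 usec' |
awk '{ print NF, $(NF-1) }'
# prints "6 730" then "5 1": the field counts differ,
# but $(NF-1) is the usec value in both cases
```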