I have written a script that will check the contents of $path and print the i+1 field (the field right after the match) in the file mentioned in $path.
#!/bin/bash
echo "Enter number of records"
read num
count=1
while [ $count -le $num ]
do
    echo "Enter path"
    read path
    # basename of the path, e.g. sample.sh
    var2="${path##*/}"
    # print the field that follows the first field matching the basename
    var3=$(awk '{for(i=1;i<NF;i++) if ($i == "'"${var2}"'") print $(i+1)}' "${path}" | head -1)
    echo "done,$var3" >> result.csv
    ((count++))
done
If the value of $path was /c/training/sample.sh or /c/training/textfile.txt, with the content below:
sample.sh
#!/bin/bash
#sample.sh 120
<pseudo-code>
textfile.txt
textfile.txt 0
This is random text
result.csv (how the output CSV file should look):
done,120
done,0
So instead of reading the path each time, how can I read all the paths if they are stored in a separate CSV file?
Sampleinput.csv
/c/training/sample.sh,User1
/c/training/textfile.txt,User2
How can I implement the awk mentioned above so that it reads each value in field 1 of Sampleinput.csv and does the same thing?
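One rough sketch of what I have in mind (reusing the awk from my script above, and assuming the paths contain no commas or spaces) is to loop over field 1 of the CSV, but I am not sure this is the best way:
#!/bin/bash
# Read the path (field 1) and the user (field 2) from each line of Sampleinput.csv
while IFS=, read -r path user; do
    var2="${path##*/}"
    var3=$(awk '{for(i=1;i<NF;i++) if ($i == "'"${var2}"'") print $(i+1)}' "$path" | head -1)
    echo "done,$var3"
done < Sampleinput.csv > result.csv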
CodePudding user response:
A much more versatile and usable arrangement is to pass the paths as parameters to the script.
#!/bin/sh
# For each file named on the command line: take the basename of the file, then for
# the first line that mentions the basename and ends in " <number>", print
# "<basename>,<number>" and move on to the next file.
awk 'FNR == 1 { path=FILENAME; sub(/.*\//, "", path) }
     $0 ~ path && / [0-9]+$/ { print path "," 0+$NF; nextfile }' "$@"
The nextfile statement is included in POSIX but might not be supported if you have a really old Awk or are using a non-POSIX system.
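If your Awk lacks nextfile, a sketch of a portable workaround (not part of the original answer, but producing the same output) is to reset a per-file flag at the start of each file instead:
#!/bin/sh
# A "found" flag, cleared at line 1 of each file, replaces nextfile
awk 'FNR == 1 { path=FILENAME; sub(/.*\//, "", path); found=0 }
     !found && $0 ~ path && / [0-9]+$/ { print path "," 0+$NF; found=1 }' "$@"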
Usage:
scriptname /c/training/sample.sh /c/training/textfile.txt >>result.csv
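If the paths are stored in Sampleinput.csv as in the question, one way to feed field 1 to the script (assuming the paths contain no spaces or embedded commas) is:
cut -d, -f1 Sampleinput.csv | xargs scriptname >>result.csv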
CodePudding user response:
I think this is what you're trying to do (untested) using GNU awk for gensub(), ARGIND, nextfile, and ENDFILE:
#!/usr/bin/env bash

IFS= read -p 'Enter number of records: ' -r num

awk -v maxFiles="$num" '
    BEGIN { OFS="," }
    ARGIND == 1 {                                  # first input file is Sampleinput.csv:
        if ( ARGC < maxFiles ) {
            ARGV[ARGC++] = gensub(/,.*/,"",1)      # queue field 1 (the path) as another input file
        }
        next
    }
    FNR == 1 {
        fname = gensub(".*/","",1,FILENAME)        # basename of the file being read
    }
    {
        for (i=1; i<=NF; i++) {
            if ( gensub(/^#+/,"",1,$i) == fname ) {    # match the basename, ignoring leading #s
                val = $(i+1)                           # keep the field that follows it
                nextfile
            }
        }
    }
    ENDFILE {
        if ( ARGIND > 1 ) {                        # print one "done,<val>" line per path file
            print "done", val
        }
        val = 0
    }
' Sampleinput.csv > result.csv