This script redirects the output of a hexdump to a text file and then loops through it printing out rows. There are exactly 374371 rows of data in the file that look like this:
1a 03 1a 03 4a 03 57 03
4b 03 44 03 1e 03 09 04
10 03 19 03 40 03 ae 03
1e 03 26 03 33 03 ad 03
10 03 84 03 43 03 62 03
Here is the script:
if [ ! -d ./temp ]; then
    mkdir ./temp
    mount -t tmpfs -o size=128m tmpfs ./temp
    if [[ -s ./temp/samples.txt || ! -f ./temp/samples.txt ]]; then
        hexdump -e '8/1 "%02x " "\n"' samples.bin > ./temp/samples.txt
        echo "            ch0    ch1    ch2    ch3"
        x=0
        while read line; do
            echo "Sample "$x": "$(echo $line | awk '{print "0x"$1$2" 0x"$3$4" 0x"$5$6" 0x"$7$8}')
            x=$((x + 1))
        done < ./temp/samples.txt
    fi
    umount ./temp
    rm -Rf ./temp
fi
The output looks like:
ch0 ch1 ch2 ch3
Sample 0: 0x1a03 0x1a03 0x4a03 0x5703
Sample 1: 0x4b03 0x4403 0x1e03 0x0904
Sample 2: 0x1003 0x1903 0x4003 0xae03
Sample 3: 0x1e03 0x2603 0x3303 0xad03
Sample 4: 0x1003 0x8403 0x4303 0x6203
Sample 5: 0xe003 0x1603 0x3403 0xc403
Sample 6: 0xf802 0x3b03 0x5303 0x6103
Sample 7: 0x1003 0x1503 0x4203 0x5803
Sample 8: 0x2303 0x1f03 0x5703 0x6203
Sample 9: 0x1703 0x7303 0x3103 0x3303
Sample 10: 0x1403 0xff02 0x3003 0x5103
Sample 11: 0x5f03 0x4203 0x4703 0x7e03
Sample 12: 0xba03 0x2603 0x3503 0xa003
Even with tmpfs, the script doesn't run any faster than just reading from disk. Is there a way to make this script run any faster?
CodePudding user response:
If you want to speed up the script, consider using awk instead of bash, as Shellter comments. Besides, you may not need to use hexdump per line; just concatenate every two bytes. Then would you please try:
awk '
BEGIN {
    print "                  ch0    ch1    ch2    ch3"   # print the header line
}
{
    printf("Sample %d:", NR - 1)                         # print the sample number
    printf(substr("        ", 1, 8 - length(NR - 1)))    # pad with spaces so the columns stay aligned
    for (i = 1; i <= NF; i += 2) {                       # join each pair of bytes
        j = i + 1
        printf("0x%s%s%s", $i, $j, j == NF ? ORS : OFS)
    }
}
' samples.txt
Output with the provided file:
ch0 ch1 ch2 ch3
Sample 0: 0x1a03 0x1a03 0x4a03 0x5703
Sample 1: 0x4b03 0x4403 0x1e03 0x0904
Sample 2: 0x1003 0x1903 0x4003 0xae03
Sample 3: 0x1e03 0x2603 0x3303 0xad03
Sample 4: 0x1003 0x8403 0x4303 0x6203
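If the temp file exists only to feed the loop, the hexdump output can also be piped straight into awk, skipping the tmpfs staging file entirely. A minimal sketch (the file name samples.bin is taken from the question; the inlined awk is a simplified variant, not the exact script above, and the tiny test file is created just for the demonstration):

```shell
# Sketch: stream hexdump straight into awk, with no intermediate file.
# Create a one-row samples.bin for the demonstration (octal escapes = 1a 03 1a 03 4a 03 57 03).
printf '\032\003\032\003\112\003\127\003' > samples.bin

hexdump -e '8/1 "%02x " "\n"' samples.bin |
awk '
BEGIN { print "                  ch0    ch1    ch2    ch3" }   # header line
{
    # join each pair of two-digit bytes into one 16-bit word
    printf("Sample %d: 0x%s%s 0x%s%s 0x%s%s 0x%s%s\n",
           NR - 1, $1, $2, $3, $4, $5, $6, $7, $8)
}
'
# last line printed: Sample 0: 0x1a03 0x1a03 0x4a03 0x5703
```

This avoids both the bash while-read loop and the per-line awk fork, which are the real bottlenecks rather than disk I/O.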
CodePudding user response:
With your shown samples and attempts, please try the following awk code, written and tested in GNU awk. It reads the Input_file twice: the first pass reads only the very first line, to get the TOTAL number of fields, and the second pass does the actual processing.
awk '
BEGIN{
  print "\tch0    ch1    ch2    ch3"
}
FNR==NR{
  totalCount=NF
  nextfile
}
RT{
  gsub(/^[[:space:]]+|\n+$/,"",RT)
  sub(/ /,"",RT)
  val=(val?val OFS:"") "0x" RT
  if(++count%(totalCount/2)==0){
    print "Sample " count2++ ":\t" val
    val=count=""
  }
}
' Input_file RS="(^|\n| )[[:alnum:]]{2} [[:alnum:]]{2}" Input_file |
column -t -s $'\t'
CodePudding user response:
It is also possible with sed, which I have found to be generally faster than awk.
cat -n | sed 's/\s*\([0-9]*\)\s*\([0-9a-f]*\) \([0-9a-f]*\) \([0-9a-f]*\) \([0-9a-f]*\) \([0-9a-f]*\) \([0-9a-f]*\) \([0-9a-f]*\) \([0-9a-f]*\)/Sample \1:\t0x\2\3 0x\4\5 0x\6\7 0x\8\9/'
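As a usage sketch, feeding one hexdump row through the pipeline above (the input line is copied from the question's sample data; note that cat -n numbers lines from 1, so this solution labels the first row "Sample 1" rather than "Sample 0" as the awk answers do):

```shell
# Sketch: cat -n supplies the line number, sed rewrites each row into a report line.
printf '1a 03 1a 03 4a 03 57 03\n' |
cat -n |
sed 's/\s*\([0-9]*\)\s*\([0-9a-f]*\) \([0-9a-f]*\) \([0-9a-f]*\) \([0-9a-f]*\) \([0-9a-f]*\) \([0-9a-f]*\) \([0-9a-f]*\) \([0-9a-f]*\)/Sample \1:\t0x\2\3 0x\4\5 0x\6\7 0x\8\9/'
# prints: Sample 1:<TAB>0x1a03 0x1a03 0x4a03 0x5703
```

The `\s` shorthand and `\t` in the replacement are GNU sed extensions, so this as written assumes GNU sed.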