Linux Shell Bash Split Text File by Columns


I have a massive list of English words that looks something like this.

1.    do accomplish, prepare, resolve, work out
2.    say suggest, disclose, answer
3.    go continue, move, lead
4.    get bring, attain, catch, become
5.    make create, cause, prepare, invest
6.    know understand, appreciate, experience, identify
7.    think contemplate, remember, judge, consider
8.    take accept, steal, buy, endure
9.    see detect, comprehend, scan
10.    come happen, appear, extend, occur
11.    want choose, prefer, require, wish
12.    look glance, notice, peer, read
13.    use accept, apply, handle, work
14.    find detect, discover, notice, uncover
15.    give grant, award, issue
16.    tell confess, explain, inform, reveal

And I would like to be able to extract the second column:

do
say
go
get
make
know
think
take
see
come
want
look
use
find
give
tell

Does anybody know how to do this in bash?

Thanks.

CodePudding user response:

Using bash

$ cat tst.sh
#!/usr/bin/env bash

while read -r line; do
    line=${line//[0-9.]}    # strip the digits and dots of the leading numbering ("1.", "2.", ...)
    line=${line/,*}         # drop everything from the first comma onward
    echo ${line% *}         # drop the last remaining word; the unquoted echo discards leading spaces
done < /path/to/input_file

$ ./tst.sh
do
say
go
get
make
know
think
take
see
come
want
look
use
find
give
tell

Using sed

$ sed 's/[^a-z]*\([^ ]*\).*/\1/' input_file
do
say
go
get
make
know
think
take
see
come
want
look
use
find
give
tell
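
If your sed supports -E (GNU and BSD sed both do), an equivalent form that some find easier to read, assuming the word you want is always the first run of lowercase letters on the line:

$ sed -E 's/^[^a-z]*([a-z]+).*/\1/' input_file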

CodePudding user response:

There are a lot of ways to do that:

awk '{print $2}' input-file
cut -d ' ' -f 5 input-file  # assuming 4 spaces between the first two columns
< input-file tr -s ' ' | cut -d ' ' -f 2
< input-file tr -s ' ' \\t | cut -f 2
perl -lane 'print $F[1]' input-file
sed 's/[^ ]*  *\([^ ]*\).*/\1/' input-file
while read a b c; do printf '%s\n' "$b"; done < input-file
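
All of the above assume whitespace-separated columns with the word you want in the second field. If some lines might be blank, a small guard helps (a minimal sketch, same assumptions otherwise):

awk 'NF >= 2 {print $2}' input-file   # only print lines that actually have a second field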

CodePudding user response:

What about awk:

cat file.txt | awk '{print $2}'

Does this work?
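
FWIW, the cat isn't needed; awk can read the file directly (assuming file.txt is your input file):

awk '{print $2}' file.txt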
