If I have a file containing this:
[
{[ ...,..., ]},
{[ ...,..., ]},
{[ ...,..., ]},
{[ ...,..., ]},
]
I want to remove that very last comma so I get:
[
{[ ...,..., ]},
{[ ...,..., ]},
{[ ...,..., ]},
{[ ...,..., ]} <----- no comma
]
I can't find a simple way to do this so far - most solutions remove the whole line or do line-based searching, which doesn't match what I'm trying to do. I feel like it would be a sed or awk command.
CodePudding user response:
rev file | tac | sed -z 's/,//' | tac | rev
Fully reverse the file (rev reverses the characters on each line, tac reverses the line order), remove the first comma in the zero-separated stream - which is now the last comma of the whole file - and then reverse it back again. Note that rev alone is not enough: it only flips characters within each line, so the first comma sed would see is the trailing comma of the first item, not the last one.
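A quick check on a toy file (the name sample.txt is just for illustration; GNU sed is assumed for -z, and tac is paired with rev because rev only reverses characters within a line):

```shell
# Toy input with a trailing comma (sample.txt is a hypothetical name).
printf '%s\n' '[' 'a,' 'b,' ']' > sample.txt

# tac flips line order, rev flips each line's characters: together they
# reverse the whole stream. sed -z then sees one record and removes the
# first comma it finds, i.e. the last comma of the original file.
rev sample.txt | tac | sed -z 's/,//' | tac | rev
```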
CodePudding user response:
Using ed, which, unlike sed, is intended for editing files and thus allows for things like searching backwards in a file:
printf "%s\n" '?,$?s/,$//' w | ed -s file
will find the last line ending with a comma and remove that comma, then save the file.
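For example, on a throwaway copy (demo.txt is an illustrative name; ed edits in place, and note it is not installed by default on every system):

```shell
# Toy input with a trailing comma.
printf '%s\n' '[' 'a,' 'b,' ']' > demo.txt

# ?,$? searches backwards from the end for a line ending in a comma,
# s/,$// strips that comma, and w writes the file back in place.
printf '%s\n' '?,$?s/,$//' w | ed -s demo.txt
cat demo.txt
```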
I agree with @Barmar in the comments though that a better solution is fixing whatever is generating your file to not include that trailing comma in the first place.
CodePudding user response:
With GNU sed:
sed -z 's/},\n]/}\n]/' file
Output:
[
{[ ...,..., ]},
{[ ...,..., ]},
{[ ...,..., ]},
{[ ...,..., ]}
]
CodePudding user response:
With shell parameter expansion, to blindly remove the last comma:
contents=$(< file)
fixed=${contents%,*}${contents##*,}
echo "$fixed"
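The two expansions split the string around its last comma; a minimal check in pure POSIX shell (no external tools needed, sample content is illustrative):

```shell
contents='[
a,
b,
]'
# ${contents%,*}  : shortest suffix cut  -> everything before the last comma
# ${contents##*,} : longest prefix cut   -> everything after the last comma
fixed=${contents%,*}${contents##*,}
printf '%s\n' "$fixed"
```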
Using text tools: reverse the file, remove the ending comma on the first line that has one, then re-reverse the file
tac file | awk '!p && /,$/ {sub(/,$/, ""); p = 1} 1' | tac
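On the same kind of toy input the pipeline behaves like this (tac is GNU coreutils; the p flag makes awk edit only the first matching line it sees):

```shell
printf '%s\n' '[' 'a,' 'b,' ']' |
  tac |                                          # last line first
  awk '!p && /,$/ {sub(/,$/, ""); p = 1} 1' |    # strip comma from first match only
  tac                                            # restore original order
```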
CodePudding user response:
A little different take that should allow for the desired output even if the ',' is not the last character of the line.
Find the last line containing a comma and capture the line number:
line_no=$(grep -n ',' t.dat | tail -n 1 | cut -f1 -d:)
Then remove the last comma from that specific line no matter where it occurs:
awk -v li="${line_no}" 'BEGIN{ FS=OFS="," } NR==li{ for(i=1; i<NF-1; i++) printf "%s", $i OFS; printf "%s\n", $i "" $NF; next }1' t.dat
This can be combined as:
awk -v li="$(grep -n ',' t.dat | tail -n 1 | cut -f1 -d:)" 'BEGIN{ FS=OFS="," } NR==li{ for(i=1; i<NF-1; i++) printf "%s", $i OFS; printf "%s\n", $i "" $NF; next
}1' t.dat
t.dat contents:
[
{[ ...,..., ]},
{[ ...,..., ]},
{[ ...,..., ]},
{[ ...,..., ]},
]
Process t.dat:
$ awk -v li="$(grep -n ',' t.dat | tail -n 1 | cut -f1 -d:)" 'BEGIN{ FS=OFS="," }
NR==li{ for(i=1; i<NF-1; i++) printf "%s", $i OFS;
printf "%s\n", $i "" $NF; next
}1' t.dat
Output:
[
{[ ...,..., ]},
{[ ...,..., ]},
{[ ...,..., ]},
{[ ...,..., ]}
]
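The grep/cut step can also be folded into awk itself by reading the file twice - a sketch, not the answer's exact method, using a small sample in place of t.dat:

```shell
printf '%s\n' '[' 'a,' 'b,' ']' > t.dat   # sample input file

# Pass 1 (NR==FNR) records the number of the last line containing a comma;
# pass 2 splices that line's final comma out, wherever it sits on the line.
awk 'NR == FNR { if (/,/) last = FNR; next }
     FNR == last { i = match($0, /,[^,]*$/)
                   $0 = substr($0, 1, i - 1) substr($0, i + 1) }
     1' t.dat t.dat
```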
CodePudding user response:
if the entire file ends with that chunk, then just do
awk 1 RS='[]][}],\n[]]\n' ORS=']}\n]\n' file
(GNU awk or mawk; POSIX awk only honors the first character of RS, so a regex RS won't work there. The program just has to be a true pattern - here 1 - so the single record is printed with the comma-less ORS appended.)
[
{[ ...,..., ]},
{[ ...,..., ]},
{[ ...,..., ]},
{[ ...,..., ]}
]
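The regex record separator trick can be spot-checked like this (GNU awk or mawk assumed, since POSIX awk only honors the first character of RS; the sample content is hypothetical):

```shell
printf '%s\n' '[' '{[ 1,2, ]},' '{[ 3,4, ]},' ']' > chunk.txt

# RS matches the literal trailing "]},\n]\n", so the one record ends just
# before it; printing that record with ORS=']}\n]\n' restores the ending
# without the comma.
awk 1 RS='[]][}],\n[]]\n' ORS=']}\n]\n' chunk.txt
```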
CodePudding user response:
awk '/,$/ && !done++{sub(/,$/,"")}1' <(tac file) | tac
CodePudding user response:
Using sed
$ sed -z 's/\(.*\),/\1/' input_file
or
$ sed 'N;s/,\(\n]\)/\1/' input_file
Output
[
{[ ...,..., ]},
{[ ...,..., ]},
{[ ...,..., ]},
{[ ...,..., ]}
]
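The -z form works because the greedy .* swallows everything up to the final comma, so only that comma is dropped; a quick check (GNU sed assumed for -z):

```shell
printf '%s\n' '[' 'a,' 'b,' ']' |
  sed -z 's/\(.*\),/\1/'   # \1 captures the whole file up to the last comma
```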