I tried several ways to transform a text file with a leading-comma column into a summarized CSV based on the file's contents, but I couldn't get it to work. The rules are:
- a record that does not start with a comma is a first-column record and begins a new output line
- a record that starts with a comma is concatenated onto the current line
- when the next record without a leading comma appears, start another line
Text file to process:
08-ipa_group
,evouth.zip
,zipe.zip
,auth-service.zip
18-ws-api_group
,mocks.zip
,auth-service.zip
,a-service.zip
Desired output:
08-ipa_group,evouth.zip,zipe.zip,auth-service.zip
18-ws-api_group,mocks.zip,auth-service.zip,a-service.zip
CodePudding user response:
Using awk:
awk -v ORS= '          # empty ORS: print does not append a newline
/^[^,]/ {              # record does not start with a comma: new group
    eol()              # terminate the previous output line, if any
}
END {
    eol()              # terminate the last output line
}
function eol() {
    if (NR > 1)
        print "\n"
}
1' file                # the "1" pattern prints every record (with no ORS)
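For reference, here is the command above run against the question's sample data (the file name `file` comes from the answer; adjust to taste):

```shell
# Build the sample input from the question, then run the awk answer
cat > file <<'EOF'
08-ipa_group
,evouth.zip
,zipe.zip
,auth-service.zip
18-ws-api_group
,mocks.zip
,auth-service.zip
,a-service.zip
EOF

awk -v ORS= '
/^[^,]/ { eol() }
END { eol() }
function eol() { if (NR > 1) print "\n" }
1' file
# prints:
# 08-ipa_group,evouth.zip,zipe.zip,auth-service.zip
# 18-ws-api_group,mocks.zip,auth-service.zip,a-service.zip
```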
CodePudding user response:
Here's a pretty straightforward shell script that:
- creates empty string vars line and row
- reads the input file line by line into the var line
- if the line starts with a comma (tested with grep '^,'), appends it to row
- else (it's a first-column line): if row is not empty (its initial state), echoes it and appends it to output.csv, then sets row to the first-column data
- finally, appends the last row
rm -f output.csv    # -r is not needed: output.csv is a plain file
line=''
row=''
while read -r line; do
    [ "$line" = '' ] && continue          # skip blank lines
    if echo "$line" | grep -q '^,'; then  # a "comma" line
        row="${row}${line}"
    else                                  # a "first-col" line
        if [ "$row" != '' ]; then         # row is not in its initial state
            echo "$row" >> output.csv
        fi
        row="$line"
    fi
done < input.txt
echo "$row" >> output.csv                 # flush the last row
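One variation worth noting (not part of the script above): piping every line through grep spawns a process per record; a POSIX case pattern does the same leading-comma test inside the shell itself. A minimal sketch, assuming the same input.txt/output.csv names:

```shell
#!/bin/sh
# Same grouping logic, with a case pattern instead of grep
rm -f output.csv
row=''
while read -r line; do
    [ "$line" = '' ] && continue      # skip blank lines
    case $line in
        ,*)                           # a "comma" line: extend the row
            row="${row}${line}" ;;
        *)                            # a "first-col" line: start a new row
            [ "$row" != '' ] && echo "$row" >> output.csv
            row=$line ;;
    esac
done < input.txt
echo "$row" >> output.csv             # flush the last row
```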