Extract data from ldif using awk in shell script

Time:06-09

Hi, I am trying to extract two fields from the file below, in CSV format.

dn: lorem ipsum
PersonalId: 123456
Contacts: { big json which may contain "uniquetag:98765432:" within some attributes }

dn: lorem ipsum
PersonalId: 123456
Contacts: { big json which may contain "uniquetag:98765432:" within some attributes }

dn: lorem ipsum
PersonalId: 123456
Contacts: { big json which may contain "uniquetag:98765432:" within some attributes }

dn: lorem ipsum
PersonalId: 123456
Contacts: { big json which may contain "uniquetag:98765432:" within some attributes }

Lorem ipsum
Lorem ipsum

Expected output

123456,98765432
123456,98765432
...

The PersonalId field will always be 6 digits and the mobile number after the uniquetag will always be 8 digits. If the mobile number is not present for a record, that record should be skipped.

I am able to do it with a simple shell script that goes through the file line by line to find a record and then captures the values using the regex ^.*PersonalId.*([0-9]{6}).*uniquetag:([0-9]{8}).*$. It works and gives the output I need, but for around 2 million records (a 300 MB file) it takes almost an hour, and I need to run this on much larger files. I tried some optimization, like splitting the original file into multiple smaller files and processing them in parallel, but that doesn't improve performance much.

I feel like I am doing this very inefficiently. It should be possible to extract these fields using just awk or sed. Is there a way to do this directly with awk or sed?

The current shell script that I have

#!/bin/bash

start=$(date +%s)
_input_file="$1"
_output_file="$2"
_parallel_threads="$3"
_debug_enabled="$4"
_user_regex='^.*PersonalId.*([0-9]{6}).*uniquetag:([0-9]{8}).*$'

debug_log() {
    log_message="$1"
    if [[ $_debug_enabled == debug* ]]; then
        printf "%b\n" "$log_message"
    fi
}

process_user_record() {
    user_record="$1"
    output_file_name="$2"

    debug_log "processing user record"
    record=$(echo "$user_record" | tr -d '\n')
    debug_log "$record"
    [[ $record =~ $_user_regex ]] && debug_log "user record matched regex" || debug_log "no match"
    match_count=${#BASH_REMATCH[@]}
    debug_log "Match count is $match_count"
    if [[ $match_count -gt 2 ]]; then
        pid="${BASH_REMATCH[1]}"
        mobile="${BASH_REMATCH[2]}"
        debug_log "Writing $pid and $mobile to output"
        echo "$pid,$mobile" >>$output_file_name
    fi
}

process_record_file() {
    record_file_name="$1"
    output_file="output_files/${record_file_name##*/}"
    user_data=''

    touch "$output_file"
    debug_log "processing: $record_file_name"
    while IFS= read -r line; do
        if [[ $line == dn* ]]; then
            debug_log 'line matches with dn'
            if [ ! -z "$user_data" ]; then
                process_user_record "$user_data" "$output_file"
                user_data=''
            fi
        else
            debug_log "Appending to user data"
            user_data="${user_data}\n${line}"
        fi
    done <"$record_file_name"

    if [ ! -z "$user_data" ]; then
        process_user_record "$user_data" "$output_file"
        user_data=''
    fi
}

echo "pid,mobile" >$_output_file
debug_log 'Starting export'
mkdir 'input_files_split'
mkdir 'output_files'
awk -v max=1000 '{print > sprintf("input_files_split/record%d", int(n/max))} /^$/ {n += 1}' "$_input_file"
declare -i counter=0
for file in input_files_split/*; do
    if [[ $counter -ge $_parallel_threads ]]; then
        wait
        counter=0
    fi
    process_record_file "$file" &
    counter+=1
done
wait
cat output_files/* >>$_output_file
rm -rf input_files_split/
rm -rf output_files/

end=$(date +%s)
runtime=$((end - start))

printf "%b\n" "Export ready\nTime taken: ${runtime}s"

CodePudding user response:

This might work for you (GNU sed):

 sed -nE '/PersonalId:/h
          /\buniquetag:/{
            H;g;s/.*PersonalId:\s*([0-9]{6}).*\n.*uniquetag:\s*([0-9]{8}).*/\1,\2/p}' file

Copy the PersonalId line to the hold space.

When a uniquetag line appears, append it to the held PersonalId line, then extract the two fields and format them as CSV.

CodePudding user response:

Using sed

$ sed -n '/PersonalId:/{N;/uniquetag:/s/[^:]*: \(.*\)\n.*uniquetag:\([^:]*\).*/\1,\2/p}' input_file
123456,98765432
123456,98765432
123456,98765432
123456,98765432

Match lines containing PersonalId:. If the next line contains uniquetag:, capture the needed data with the capturing parentheses, referenced in the replacement as \1 and \2; if it does not, the substitution fails, nothing is printed, and the record is skipped.
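As a quick sanity check (with hypothetical data), a record whose Contacts line lacks a uniquetag produces no output line, so it is skipped as required:

```shell
# Hypothetical input: the second record has no uniquetag
cat > /tmp/in.ldif <<'EOF'
dn: a
PersonalId: 111111
Contacts: { "x": "uniquetag:11112222:" }

dn: b
PersonalId: 222222
Contacts: { nothing here }
EOF

# N appends the Contacts line to the PersonalId line; the s///p only
# prints when the uniquetag substitution succeeds.
result=$(sed -n '/PersonalId:/{N;/uniquetag:/s/[^:]*: \(.*\)\n.*uniquetag:\([^:]*\).*/\1,\2/p}' /tmp/in.ldif)
printf '%s\n' "$result"    # 111111,11112222
```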
