How to use locking mechanism to write to file from different scripts


I want to achieve the following:

  • one script continuously writes sensor results to a file
  • another script, running concurrently, reads the file every second: it locks the file, averages all the values in it, empties it, and then unlocks it so the first script can resume writing sensor results

I'm using Bourne shell scripts, and below is my patchy solution so far, which combines solutions I've found online. I want to know a few things about what I currently have, and would also welcome any suggestions for improvements.

while :
do
(
    flock -x 2
    echo "$SENSOR_1_VALUE" | tee -a sensor1.txt
) 2> sensor1lockfile
done

And the averaging script:

#every 1s:
(
    flock -x 2
    awk '{ total += $1; count++ } END { print total/count }' sensor1.txt
    # empty the file
    > sensor1.txt
) 2> sensor1lockfile

Now, eventually I will have about 10 sensors that will each have their own file, sensor1...sensor10 etc., and the averaging script will average all these files every second and send the averaged values to another subsystem.

This may be very stupid, but I tried making sensor1.txt replace sensor1lockfile in both the scripts above, and this did not work. Ideally I would not want to have 20 files (10 for each sensor's values and 10 purely for locking), hence why I tried to use only the values text file as the lock file.
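Concretely, my failed attempt looked roughly like this:

```shell
# My attempt at locking on the data file itself (no separate lockfile).
# In hindsight, "2> sensor1.txt" truncates the data file every time the
# subshell starts, and any stderr output would land among the sensor
# readings, which may well be why it did not work.
SENSOR_1_VALUE=5   # stand-in value for this snippet
(
    flock -x 2
    echo "$SENSOR_1_VALUE" | tee -a sensor1.txt
) 2> sensor1.txt
```

Presumably opening the lock fd with `2>>` (append) instead of `2>` would at least avoid the truncation.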

I've also heard that the ( ) subshell is not ideal for performance; what are my alternatives here? I also noted that fd numbers over 9 did not work in sh, which is what I need to use... That in itself may be a problem for my application.

Any help, suggestions or input would be greatly appreciated, thanks.

CodePudding user response:

If you really want to solve this using shell scripts, I think you're actually pretty close. I would rewrite the "sensor reading" script to look something like this:

#!/bin/bash

values="sensor$1.txt"
lockfile="sensor$1.lock"

trap 'rm -f $lockfile' EXIT

while :; do
    (
        flock -x 3
        echo $(( RANDOM % 10 )) >> "$values"
    ) 3> "$lockfile"

    sleep 0.5
done

This script takes a device number as an input parameter, so we only need one script for N sensors. It automatically cleans up its lockfile on exit. Here I'm faking sensor values using $RANDOM; you would of course replace this with something that actually reads a sensor.
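For instance, a hypothetical stand-in for the `$RANDOM` line that reads a real value from a sysfs hwmon node (the path below is an assumption; adjust it for your hardware):

```shell
# Hypothetical sensor read. The sysfs path is an assumption and will
# differ per board; fall back to 0 so the write loop keeps producing a
# value even if the node is missing.
sensor_path="/sys/class/hwmon/hwmon0/temp1_input"

read_sensor() {
    cat "$sensor_path" 2>/dev/null || echo 0
}

read_sensor
```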

For creating per-sensor averages, we can just loop over the sensor*.txt files, like this:

#!/bin/bash

while :; do
    for values in sensor*.txt; do
        lockfile="${values/.txt/.lock}"
        (
            flock -x 3
            awk -vVALUES="$values" '{ total += $1; count++ } END { printf "%s avg (%d): %f\n", VALUES, count, total/count }' "$values"
            sleep 2
        ) 3> "$lockfile"
    done

    sleep 1
done

I tested these by starting up 10 fake sensors, like this:

$ for i in {1..10}; do sh startsensor.sh $i & done
...
$ jobs
[1]   Running                 sh startsensor.sh $i &
[2]   Running                 sh startsensor.sh $i &
[3]   Running                 sh startsensor.sh $i &
[4]   Running                 sh startsensor.sh $i &
[5]   Running                 sh startsensor.sh $i &
[6]   Running                 sh startsensor.sh $i &
[7]   Running                 sh startsensor.sh $i &
[8]   Running                 sh startsensor.sh $i &
[9]-  Running                 sh startsensor.sh $i &
[10]   Running                 sh startsensor.sh $i &
$ ls
calc_average.sh  sensor1.txt   sensor3.txt   sensor5.txt   sensor7.txt   sensor9.txt
sensor10.lock    sensor2.lock  sensor4.lock  sensor6.lock  sensor8.lock  startsensor.sh
sensor10.txt     sensor2.txt   sensor4.txt   sensor6.txt   sensor8.txt
sensor1.lock     sensor3.lock  sensor5.lock  sensor7.lock  sensor9.lock

With those scripts running, the calc_average.sh script produces output like this:

sensor10.txt avg (11): 5.181818
sensor1.txt avg (11): 3.909091
sensor2.txt avg (10): 3.800000
sensor3.txt avg (10): 2.700000
sensor4.txt avg (10): 4.000000
sensor5.txt avg (10): 3.100000
sensor6.txt avg (10): 4.100000
sensor7.txt avg (10): 3.900000
sensor8.txt avg (10): 4.700000
sensor9.txt avg (10): 4.700000

Note that I'm using fd 3 instead of fd 2 as the lockfile descriptor so that we don't suppress stderr output (which can be useful for identifying errors).
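As a side note on the subshell question: flock(1) can also run a command directly (its `flock lockfile command` form), which sidesteps both the explicit subshell and sh's single-digit fd limitation. A minimal sketch, not part of the scripts above:

```shell
# flock opens (and creates, if needed) the lockfile itself, runs the
# command while holding the lock, and releases the lock when the
# command exits.
rm -f sensor1.txt

flock sensor1.lock sh -c 'echo 2 >> sensor1.txt'
flock sensor1.lock sh -c 'echo 4 >> sensor1.txt'

# Average under the same lock; prints 3 for the two values above.
flock sensor1.lock awk '{ total += $1; count++ } END { if (count) print total/count }' sensor1.txt
```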

CodePudding user response:

You can simplify:

while :; do
    flock -x 1
    echo "$sensorvalue"
    flock -u 1
done >>"$sensorfile"

while sleep 1; do
    flock -x 0
    awk '{ total += $1; count++ } END { print total/count }' "$sensorfile"
    >"$sensorfile"
    flock -u 0
done <"$sensorfile"
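(Just a worked illustration, not part of the answer itself.) The explicit `flock -x` / `flock -u` pairing above relies on the fd staying open across loop iterations; the same pattern with a dedicated fd looks like:

```shell
# Sketch of explicit lock/unlock on a long-lived fd. The redirection
# opens the fd once; flock -x locks it and flock -u releases the lock
# without closing the fd.
sensorfile="sensor1.txt"
rm -f "$sensorfile"

exec 9>>"$sensorfile"     # fd 9: highest single-digit fd, safe for plain sh
flock -x 9
echo "7" >&9              # critical section: append a reading
flock -u 9
exec 9>&-                 # close the fd when finished
```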