counter variable in file for bash script


I want to run several commands independently in different bash scripts. When they have all finished, the computer should shut down. Therefore I created a counter called "n" in a txt file, which goes up by one every time a script is executed and down by one after it finishes. If the counter is not zero, the script shouldn't shut down my computer.

#!/bin/bash
source /home/user/bin/log/counter.txt
$n = $n + 1
echo "backup"
$n = $n -1
if [ "$n" == "0" ] ; then
    echo "shutdown"
    #shutdown -P now
else
    exit 0
fi

CodePudding user response:

The approach outlined in the question can’t work for multiple reasons:

  • It doesn’t update the counter file. Each script reads the counter value, then increments and decrements it only in its own memory, which has no effect on the other scripts.
  • To maintain a shared counter in a file, atomic updates to the file are necessary. Otherwise the well-known read-modify-write race condition will render the counter useless (see the sketch below).
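
For illustration, here is a minimal sketch of how the unlocked read-modify-write sequence loses updates (the /tmp/counter path is only an example):

#!/bin/bash
# Two processes read the same value, both write back the same result,
# and one increment is lost.
echo 0 > /tmp/counter
for i in 1 2; do (
  n="$(< /tmp/counter)"                  # read
  sleep 1                                # both subshells now hold n=0
  echo "$((n + 1))" > /tmp/counter       # modify + write: last writer wins
)& done
wait
cat /tmp/counter                         # prints 1, not 2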

A possible implementation of increment and decrement operations using Bash and flock on the counter file itself:

#!/bin/bash
set -euo pipefail

_atomically_modify() {
  local -r file_name="$1"                 # file name for descriptor 0
  local -r operation="$2"                 # the transformation to perform
  flock 0                                 # atomic from here until return
  trap 'trap - return; flock -u 0' return
  local after
  after="$("$operation")"                 # read + modify
  printf '%s' "$after" > "$file_name"     # write
  printf '%s' "$after"                    # fetch (for the caller to use)
}

_increment_operation() { printf '%d' "$(("$(< /dev/stdin)" + 1))"; }
_decrement_operation() { printf '%d' "$(("$(< /dev/stdin)" - 1))"; }

increment() { _atomically_modify "$1" '_increment_operation' < "$1"; }
decrement() { _atomically_modify "$1" '_decrement_operation' < "$1"; }
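
A quick sanity check of the two primitives (the file name is just an example):

printf '%d' 0 > counter.txt
increment counter.txt    # the file now contains 1, and 1 is printed
decrement counter.txt    # the file now contains 0, and 0 is printed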

It’s good to torture-test this a bit. Let’s take n_processes (> 1) parallel processes, each of which increments and decrements the counter n_rmw times before decrementing it one last time:

#!/bin/bash
set -euo pipefail

declare -ri n_processes=20
declare -ri n_rmw=30
declare -i i j counter
counter_file="$(mktemp)"
declare -r counter_file
trap 'rm "$counter_file"' exit

printf '%d' "$((n_processes + 1))" > "$counter_file"
for ((i = 0; i < n_processes; ++i)); do (
  for ((j = 0; j < n_rmw; ++j)); do       # play with the counter
    increment "$counter_file" > /dev/null
    decrement "$counter_file" > /dev/null
  done
  counter="$(decrement "$counter_file")"  # work finished
  ((! counter)) && printf 'Process %d finished last.\n' "$i"
)& done
counter="$(decrement "$counter_file")"    # track the starter process
((! counter)) && printf 'The starter process finished last.\n'

wait                                      # only to check the counter
printf 'Final counter value: %d\n' "$(< "$counter_file")"

The final counter value is zero, as expected. Now try to run the same experiment with the flock 0 line and the trap line that follows it removed. That (mostly) won’t work as expected.

Important facts to note:

  1. The initial counter “increment” is performed by the “main” starter process (printf '%d' "$((n_processes + 1))" > "$counter_file"), not by the individual “worker” processes started later.
  2. The “main” starter process also participates in the “counting”, which is the reason for n_processes + 1.

Combined, the two facts above make it possible to avoid two closely related, well-known questions:

  1. When the “main” starter process sees a counter value of zero at the end, does it mean that all “worker” processes have finished or that no “worker” processes have started yet?
    [The former. All “worker” processes have finished in that scenario.]
  2. More generally, when any process sees a counter value of zero at the end, can it safely call shutdown (as mentioned in the question) or can it be the case that the “main” starter process has not finished starting all “worker” processes yet?
    [The former. All processes, including the “main” starter process, have finished in that scenario.]
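
Applied to the setup from the question, a hypothetical starter script plus worker scripts could use these primitives as sketched below (all paths, script names and the counter_lib.sh helper file are assumptions):

#!/bin/bash
# starter.sh (hypothetical) - assumes the increment/decrement functions
# above were saved into a sourceable helper file.
set -euo pipefail
source /home/user/bin/counter_lib.sh
counter_file=/home/user/bin/log/counter.txt

printf '%d' 3 > "$counter_file"           # 2 worker scripts + the starter itself
/home/user/bin/backup1.sh &               # each worker decrements once when done
/home/user/bin/backup2.sh &
counter="$(decrement "$counter_file")"    # the starter has finished its part
if ((counter == 0)); then
  echo "shutdown"                         # only reached if both workers already finished
  #shutdown -P now
fi

#!/bin/bash
# backup1.sh (hypothetical worker)
set -euo pipefail
source /home/user/bin/counter_lib.sh
counter_file=/home/user/bin/log/counter.txt

echo "backup"                             # the actual work
counter="$(decrement "$counter_file")"
if ((counter == 0)); then
  echo "shutdown"                         # this worker was the last one to finish
  #shutdown -P now
fi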

CodePudding user response:

Decrementing the content of a file causes a race condition between all actors that want to update it, because each of them has to open the file, calculate the new value and write it back. Doing that correctly requires locking, which is hard to get right.
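
For reference, such a locked update can be done by letting flock(1) wrap the whole read-modify-write in a single command (the lock-file and counter paths below are only examples), but the append-only design that follows avoids the problem entirely:

# decrement the counter while holding an exclusive lock on a lock file
flock /home/user/bin/log/counter.lock bash -c '
  n=$(< /home/user/bin/log/counter.txt)
  echo "$((n - 1))" > /home/user/bin/log/counter.txt
'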

Since /home/user/bin/log is a log directory, consider a different design where you do not care what the current state is; you only ever add to it. Each script can create a separate file to signal that it is done, or every script can append a line to a shared "done" file:

echo "$0" >> /home/user/bin/log/done.txt
lines=$(wc -l </home/user/bin/log/done.txt)
if ((lines == n)); then
     echo "shutdown"
fi

For anything more complex, consider storing the state in a database (MySQL or similar), or use Airflow or another scheduling system.

CodePudding user response:

Now it works.

#!/bin/bash
source bin/log/counter.txt
n=$((n+1))
echo "backup"
n=$((n-1))
if [ "$n" == "0" ]; then
    echo "shutdown"
    #shutdown -P now
else
    exit 0
fi