Reading individual rows from a file and using values in another script


I have a file (x.txt) with a single column containing one hundred values:

228.71
245.58
253.71
482.72
616.73
756.74
834.72
858.62
934.61
944.60
....

I want to pass each of these values to an existing script, so that my command on the terminal is:

script_name 228.71 245.58 253.71 ... and so on.

I am trying to write a bash script that reads each row of the file automatically and passes all the values to the script. I have been trying for a while now but cannot find a solution.

Is there a way to do this?

CodePudding user response:

You could read the text file line by line into a list, stripping the newlines along the way. subprocess.call takes a list consisting of the command to run followed by its arguments, so build a new list from the command name and the values just read.

import subprocess

# Read every line of x.txt into a list, stripping trailing newlines
with open("x.txt") as f:
    values = [line.strip() for line in f]

# The command name goes first, followed by one list element per argument
subprocess.call(["script_name"] + values)
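
Note that on Python 3.5 and later, subprocess.run is the recommended interface; subprocess.call still works and is kept for backwards compatibility.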

CodePudding user response:

In pure Bash, you could use mapfile (also known as readarray) to populate an array with the lines of the input file.

For example:

#!/usr/bin/env bash
# Name of this script, with the leading path stripped
CMDNAME="${0##*/}"
# Read each line of x.txt into the array "data"
mapfile -t data < x.txt
# Print the script name followed by all values on one line
printf "%s %s\n" "$CMDNAME" "${data[*]}"

Output, with the script saved as foo.sh:

$ ./foo.sh
foo.sh 228.71 245.58 253.71 482.72 616.73 756.74 834.72 858.62 934.61 944.60
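
If you want to actually invoke the target script rather than just print the command line, expand the array as individual arguments instead (a minimal sketch, assuming the script is called script_name and is on your PATH):

#!/usr/bin/env bash
# Read each line of x.txt into the array "data"
mapfile -t data < x.txt
# "${data[@]}" expands to one argument per line of the file
script_name "${data[@]}"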

CodePudding user response:

There's a standard utility for this.

$ cat x.txt | xargs script_name

Consult the xargs man page for details; you may want to adjust the -n value.
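
For instance, -n caps how many values are passed to each invocation (a sketch; the batch size of 10 is arbitrary):

$ xargs -n 10 script_name < x.txt

Without -n, xargs passes as many values as fit on one command line, which for a hundred short numbers means a single invocation.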
