create 100 files containing "1" without for loop

I was trying to make a script that would create a hundred files called log-01, log-02, log-03, etc., each containing "1", without using any loops, but it always gave the error "ambiguous redirect":

#!/bin/bash
echo "1">log-{01..100}

I tried putting a $ before the log-{01..100} or placing it in quotes but nothing helps.

CodePudding user response:

Output can only be redirected to a single file, not multiple files.

If you want multiple output files, you can use tee for that:

#!/bin/bash
echo '1' | tee log-{01..99} log-100

This will create files named log-01, log-02, …, log-98, log-99, and log-100.
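A quick way to sanity-check the result (a sketch; the mktemp scratch directory is just to avoid cluttering the current one):

```shell
# Run in a throwaway directory, then count the files tee created
# and confirm each one holds the single line "1".
cd "$(mktemp -d)"
echo '1' | tee log-{01..99} log-100 > /dev/null
ls log-* | wc -l    # reports 100
cat log-01          # prints 1
```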

Note that the above will only work with bash starting from version 4.0. If your bash is older and does not support formatting brace expansions with leading zeros, you can use plain old shell with command substitution:

#!/bin/sh
echo '1' | tee $(printf 'log-%02d ' $(seq 100))
# or formatting with GNU seq directly:
echo '1' | tee $(seq -f 'log-%02g' 100)

This is one of the few use cases where the expansion must not be quoted, so it can be field-split after expansion.
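A minimal illustration of that rule (the names variable is only for the demonstration): counting positional parameters with set -- shows how many words the shell actually produces.

```shell
# Unquoted, the command substitution is field-split into three
# separate filenames; quoted, it stays one word with embedded spaces.
names=$(printf 'log-%02d ' 1 2 3)
set -- $names   && echo "$#"    # 3
set -- "$names" && echo "$#"    # 1
```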

CodePudding user response:

You cannot redirect to multiple files. You can use tee, as already pointed out, or you can do something like this:

printf '%s\n' {01..100} | xargs -P 100 -I {} sh -c 'echo 1 > log-{}'

You can use -P to control how many processes to run in parallel.

Or, if you're using a platform that supports it, such as most Linux distributions, you can use GNU parallel:

printf '%s\n' {01..100} | parallel "echo 1 > log-{}"

Or you can use a better tool than the shell, one that provides easier ways of doing this. For example, in awk:

awk 'BEGIN{for(i=1;i<=100;i++){print "1" > "log-"sprintf("%.2d",i) }}'

You might have a problem with too many open files on some systems, but not if you do this using gawk (GNU awk). If you can't use gawk, try:

mawk 'BEGIN{for(i=1;i<=100;i++){file="log-"sprintf("%.2d",i); print "1" > file; close(file)}}'
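A quick check that the close()-per-file variant produces the expected files (a sketch, using a scratch directory and whatever awk is on the PATH):

```shell
cd "$(mktemp -d)"
# Open, write, and close each file in turn, so only one
# descriptor is ever held open at a time.
awk 'BEGIN{for(i=1;i<=100;i++){f="log-" sprintf("%.2d",i); print "1" > f; close(f)}}'
ls log-* | wc -l    # 100
```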

The main advantage of the awk approaches is speed. For instance, if creating 10 thousand files:

$ time ( printf '%s\n' {01..10000} | xargs -P 100 -I {} sh -c 'echo 1 > log-{}' )

real    0m4.375s
user    0m20.996s
sys     0m7.308s


$ time ( printf '%s\n' {01..10000} | parallel -j 100 "echo 1 > log-{}")

real    0m12.640s
user    0m21.504s
sys     0m12.414s


$ time gawk 'BEGIN{for(i=1;i<=10000;i++){print "1" > "log-"sprintf("%.2d",i) }}'

real    0m0.954s
user    0m0.803s
sys     0m0.148s


$ time gawk 'BEGIN{for(i=1;i<=10000;i++){f="log-"sprintf("%.2d",i); print "1" > f; close(f) }}'

real    0m0.133s
user    0m0.020s
sys     0m0.109s

As you can see above, awk is significantly faster even when running the other tools with 100 jobs in parallel. The shell is slow.

CodePudding user response:

Your command echo "1">log-{01..100} is expanded by bash to the equivalent line:

echo "1">log-001 log-002 log-003 log-004 log-005 log-006 ... log-100

and that structure is weird/ambiguous/wrong for your purpose.
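You can make the expansion visible by printing it instead of redirecting. For example, the tail end of the range shows the three-digit padding bash applies, taken from the width of the longest endpoint:

```shell
echo log-{098..100}
# log-098 log-099 log-100
```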

A good solution is to use the tee command as suggested by @knittl. tee takes a list of files and writes into each of them the input it receives on standard input:

echo "1" | tee log-{01..99} log-100

CodePudding user response:

Check this:

seq 100 | xargs -i sh -c 'inputNo=$(printf %02d {}); echo "1" > log-$inputNo'

xargs is God's command.
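As a side note, GNU xargs documents -i as a deprecated spelling of -I, so the same idea with the current option looks like this (a sketch, assuming GNU xargs; the scratch directory is only for tidiness):

```shell
cd "$(mktemp -d)"
# -I {} substitutes each input line for {} in the command string.
seq 100 | xargs -I {} sh -c 'printf "1\n" > "log-$(printf %02d {})"'
ls log-* | wc -l    # 100
```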

CodePudding user response:

Yet another solution in bash:

. <(printf 'echo 1 >log-%s\n' {01..99} 100)
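To see why this works: the printf emits one complete "echo 1 >log-NN" command per name, and . <(...) sources that generated script in the current shell. Printing the first few generated lines shows the idea:

```shell
printf 'echo 1 >log-%s\n' {01..99} 100 | head -n 3
# echo 1 >log-01
# echo 1 >log-02
# echo 1 >log-03
```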