stream stdout to file without increasing file size


I have a small application which streams a lot of lines via stdout. After 30 minutes, my disk is full. Any ideas how I can stream the stdout to a file and "tail" it at the same time from another application?

Currently I redirect stdout to a file, but the file keeps growing:

    node app.js > /tmp/p
    tail -f /tmp/p

but with this I run into the disk-space problem again. Any ideas would be awesome.

CodePudding user response:

tee reads from standard input and writes to standard output and to files:

    node app.js | tee /tmp/p

You can truncate the file /tmp/p as you please without interrupting your stdout stream.
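
For example, from another shell you can empty it periodically; either of these does the job (a minimal sketch, assuming the coreutils truncate tool is available):

    truncate -s 0 /tmp/p    # coreutils: shrink the file back to zero bytes
    : > /tmp/p              # plain-shell equivalent

Since tee keeps writing at its old offset, the file typically becomes sparse after truncation: its apparent size keeps growing, but the truncated data no longer occupies disk space.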

Another option would be to double-buffer: write N lines at a time to a fixed, rotating set of files. split --filter=... with a suitable command would do it. For example:

node app.js | split -d --filter='cat > $((${FILE/x} % 2))'
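
If you also want to pin down the chunk size and suffix length, and avoid the zero-padded suffixes (08, 09, ...) that shell arithmetic would reject as bad octal, a variant along these lines should work (an untested sketch assuming GNU split; /tmp/p- is just an example prefix):

    node app.js | split -d -a 3 -l 5000 --filter='cat > /tmp/p-$(expr ${FILE#x} % 2)'

Each chunk re-opens its target with >, so /tmp/p-0 and /tmp/p-1 are truncated and rewritten in turn.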

As in the sketch above, you want to set -a sufficiently large that you don't run out of suffix numbers (the default suffix length is 2, so at most 100 files), and adjust -l to something well above its default of 1000. It still blows out once it does run out of sequence numbers, though, so maybe just write a script instead. double_buffer:

    #!/bin/bash
    # double_buffer: copy stdin into a small rotating set of files,
    # so total disk usage stays bounded.

    prefix=$1
    if [ -z "$prefix" ]
    then
        echo "usage: double_buffer prefix [lines] [sequences]"
        exit 1
    fi
    lines=${2-1000}       # lines written to each file before rotating
    sequences=${3-2}      # number of files in the rotation
    sequence=0
    line=0
    while read -r
    do
        # At the start of each block of $lines lines, advance to the next
        # file in the rotation (except on the very first line) and truncate it.
        if [ "$(echo "$line % $lines" | bc)" -eq 0 ]
        then
            if [ "$line" -ne 0 ]
            then
                sequence=$(echo "($sequence + 1) % $sequences" | bc)
            fi
            : > "$prefix-$sequence"
        fi
        echo "$REPLY" >> "$prefix-$sequence"
        line=$(echo "$line + 1" | bc)
    done

which you would call with:

    node app.js | double_buffer /tmp/p 5000 3

This writes 5k lines to /tmp/p-0, then the next 5k lines to /tmp/p-1, then /tmp/p-2, then overwrites /tmp/p-0 again, and so on. bc is used because it is arbitrary precision, unlike bash integers, which max out at 2^63.
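
To follow the rotating files from another terminal, GNU tail's -F option (follow by name and keep retrying) copes with the files being truncated and recreated, for example:

    tail -F /tmp/p-0 /tmp/p-1 /tmp/p-2

Output from the three files is interleaved, with each block preceded by a ==> filename <== header.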
