Bash scanner optimization


I'm currently writing a network scanner in Bash.
The idea is to take a network as input and, while iterating through it, ping each IP and probe its ports to see whether they're open.

Variable explanation (example):
Input -> Network to scan: 192.168.1.0/24

Starting IP: 192.168.1.0 -> $i1.$i2.$i3.$i4
Last IP: 192.168.1.255 -> $m1.$m2.$m3.$m4

for oct1 in $(seq "$i1" 1 "$m1"); do
    for oct2 in $(seq "$i2" 1 "$m2"); do
        for oct3 in $(seq "$i3" 1 "$m3"); do
            for oct4 in $(seq "$i4" 1 "$m4"); do
                # Log the address if it answers a single ping...
                ping -c 1 "$oct1.$oct2.$oct3.$oct4" | grep "64 bytes" | cut -d" " -f4 | tr -d ":" >> scan.txt
                # ...and log port 80 if nc can connect to it.
                nc -nvz "$oct1.$oct2.$oct3.$oct4" 80 2>&1 | grep succeeded | cut -d" " -f4 >> scan.txt
            done
        done
    done
done

The file scan.txt looks something like:

192.168.1.1
80
192.168.1.2
192.168.1.4
192.168.1.5
192.168.1.7
80
192.168.1.9
(...)

My only problem is that this solution, although it works, takes too much time.
Scanning a 192.168.1.0/24 network, the script starts off normally, but after around 10 IPs it slows down to the point of almost getting stuck.

I imagine this has something to do with the ping and nc commands leaving jobs running in the background. If I add & disown to the end of the ping and nc commands, it runs much more smoothly but messes with the output.
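
For reference, the backgrounded variant of the two probe lines is:

# Same probes, detached so the loop doesn't wait on them:
ping -c 1 "$oct1.$oct2.$oct3.$oct4" | grep "64 bytes" | cut -d" " -f4 | tr -d ":" >> scan.txt & disown
nc -nvz "$oct1.$oct2.$oct3.$oct4" 80 2>&1 | grep succeeded | cut -d" " -f4 >> scan.txt & disown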

Instead of going from .1 to .254, it starts looking like:

192.168.1.6
80
192.168.1.10
80

80
192.168.1.1
192.168.1.5
192.168.1.3
(...)

Can this code be optimized (or done differently) in order to run faster?

CodePudding user response:

ping -c 1 "$oct1.$oct2.$oct3.$oct4" | grep "64 bytes" | cut -d" " -f4 | tr -d ":" >> scan.txt

What happens when ping doesn't produce the "64 bytes" message? Your script stops for quite a while. On my system, it takes 10 seconds to fail, but I'm not sure there's anything particularly standard about that timeout.

$ time ping -c1 192.168.0.123
PING 192.168.0.123 (192.168.0.123) 56(84) bytes of data.

--- 192.168.0.123 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms


real    0m10.004s
user    0m0.000s
sys     0m0.000s

Most Linux distributions these days include timeout(1), so you can limit how long your script waits:

$ timeout 2 ping -c1 192.168.0.123 || echo no
PING 192.168.0.123 (192.168.0.123) 56(84) bytes of data.
no
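
Alternatively, Linux's iputils ping has a per-reply timeout flag of its own, -W (in seconds), which saves spawning an extra process (other ping implementations spell this differently, so treat it as Linux-specific):

$ ping -c1 -W1 192.168.0.123 || echo no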

For concurrent processing (pinging more than one host at a time), you might consider using make(1) with the -j option. Use a script to produce a set of addresses, each one a filename, then define a Makefile rule to produce an output file for each input. As a final step, concatenate all the outputs together.
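
For instance, a small driver could create one empty file per address (a sketch; the /24 range and naming scheme here are illustrative):

#!/bin/bash
# One file per address to scan; the filename itself carries the
# address, so the Makefile rule below can recover it as $*.
for n in $(seq 1 254); do
    touch "192.168.1.$n.ping"
done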

If your input files are .ping and your output files are .pong, then a rule along the lines of:

# Register the custom suffixes so the .ping.pong rule fires.
.SUFFIXES: .ping .pong

.ping.pong:
        timeout 2 ping -c 1 $* \
        | grep "64 bytes" \
        | cut -d" " -f4 \
        | tr -d ":" > $@~
        nc -nvz $* 80 2>&1 \
        | grep succeeded | cut -d" " -f4 >> $@~
        mv $@~ $@

would "compile" every .ping file to a .pong file. (The above doesn't quite work because you need to remove the suffix from the filename for use as a command-line argument.)

One final word of advice: if you find yourself using grep, cut, sed, and tr in a pipeline, awk(1) is your friend. There are 100 reasons, but maybe the best one is that you can easily write your awk script to let unexpected input "fall through" where you can see it and deal with it. With grep, everything unexpected is discarded, leaving you to guess (or ask SO) what's missing.
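
For example (a sketch, assuming the usual iputils ping output where field 4 is the responding address followed by a colon), the grep/cut/tr pipeline above collapses to a single awk program:

$ ping -c1 -W1 192.168.1.1 | awk '/64 bytes/ { sub(":", "", $4); print $4 }'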
