Consider an ideal case where a script has been written with
use strict;
use warnings FATAL => 'all';
and has been thoroughly reviewed, tested, and debugged, and you are happy with how it works.
There is no dynamically generated code or other fancy stuff in the script; overall the code is simple and low-tech.
For critical applications, would such a script be as correct and safe with the checks commented out:
# use strict;
# use warnings FATAL => 'all';
as it is with them on?
Provided that special caution is taken during edits, upgrades, and any other maintenance to re-enable both use strict and use warnings and to re-test.
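For concreteness, here is a minimal hypothetical sketch of the kind of difference I am asking about: with use warnings FATAL => 'all' a malformed numeric input kills the script, while with the pragmas commented out the same line silently carries on with a partial value.
#!/usr/bin/perl
use strict;
use warnings FATAL => 'all';

my $input = "12abc";          # hypothetical unexpected input
my $n = $input + 0;           # dies here: Argument "12abc" isn't numeric
print "got $n units\n";       # never reached while FATAL warnings are on;
                              # with the pragmas commented out, prints 12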
Edit:
IMHO the correctness question is worth answering, no matter whether the original reason justifies the hassle in the opinion of the reader. Replies like "you should use it because you don't lose much" or "you just should because it's best practice" are non-answers. Let's take a fresh, unbiased look at whether strict and warnings are undoubtedly recommended to be kept on in an already debugged script.
The reason is the time-to-finish-task (performance) penalties that these pragmas introduce.
Update
For a script that does its job quickly, is called numerous times, and where response time matters, the cumulative effect can make a difference.
Update: time-to-finish-task penalties
CPU is i5-3320M, OS is OpenBSD 7.2 amd64.
# Baseline: bare interpreter startup, 3 timed passes of 10000 runs each.
for pass in $(seq 3); do
    time for i in $(seq 10000); do
        /usr/bin/perl -e ';'
    done
    sleep 3
done
sleep 3
# Same loop, but with both pragmas loaded each time.
for pass in $(seq 3); do
    time for i in $(seq 10000); do
        /usr/bin/perl -e 'use strict; use warnings;'
    done
    sleep 3
done
perl is v5.32.1, vendor-patched for security (read: at the expense of performance).
3 passes of 10000 of /usr/bin/perl -e ';':
1m32.01s real 0m00.60s user 0m03.24s system
1m32.60s real 0m00.70s user 0m03.42s system
1m31.53s real 0m00.69s user 0m04.17s system
3 passes of 10000 of /usr/bin/perl -e 'use strict; use warnings;':
2m46.08s real 0m00.72s user 0m04.63s system
2m48.99s real 0m00.61s user 0m04.79s system
2m49.64s real 0m00.75s user 0m05.16s system
Roughly 75 seconds of stopwatch-time difference for 10000 invocations.
Same shell command, but with the perlbrew-installed /opt/p5/perlbrew/perls/perl-5.36.0/bin/perl instead of the vendor /usr/bin/perl:
3 passes of 10000 of /opt/p5/perlbrew/perls/perl-5.36.0/bin/perl -e ';':
1m09.31s real 0m00.48s user 0m02.60s system
1m12.06s real 0m00.49s user 0m02.94s system
1m14.81s real 0m00.70s user 0m03.44s system
3 passes of 10000 of /opt/p5/perlbrew/perls/perl-5.36.0/bin/perl -e 'use strict; use warnings;':
2m20.81s real 0m00.55s user 0m04.03s system
2m21.98s real 0m00.72s user 0m04.26s system
2m21.75s real 0m00.58s user 0m03.86s system
Roughly 70 seconds of stopwatch-time difference for 10000 invocations.
For those who find the times too long: that is due to OpenBSD. I had done some measurements earlier, and a perl 'hello world' turned out to be 8.150 / 1.688 = 4.8 times slower on OpenBSD than on antiX Linux.
Update 2: strict-only time-to-finish-task penalties
/usr/bin/perl -e 'use strict;':
1m59.70s real 0m00.51s user 0m04.27s system
1m59.36s real 0m00.58s user 0m04.04s system
1m57.58s real 0m00.63s user 0m04.50s system
Roughly 26 seconds of stopwatch-time overhead for 10000 invocations.
/opt/p5/perlbrew/perls/perl-5.36.0/bin/perl -e 'use strict;':
1m29.06s real 0m00.59s user 0m04.55s system
1m30.04s real 0m00.52s user 0m04.57s system
1m31.26s real 0m00.54s user 0m05.30s system
Roughly 20 seconds of stopwatch-time overhead for 10000 invocations.
Update 3
Up to now, replies have been mostly evasive, pointing out the negligibility of the performance penalties. One may or may not care about 7 milliseconds per invocation, or 70 seconds per 10K invocations. Whatever. Please disregard the provided reason, or any other possible reason, and focus on the actual question about correctness, as it deserves a solid answer in its own right.
CodePudding user response:
First of all, neither pragma introduces any performance penalty. They simply set flags which are only checked when exceptional situations occur, and those flags are checked whether the pragmas were used or not. So the whole question relies on a false premise.
But to answer your question: both of these pragmas have a run-time effect, so whether removing them will make a difference depends on the thoroughness of your tests. Your tests probably aren't complete, so a difference is possible, even likely.
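For example (a minimal sketch with a made-up variable name), strict 'refs' is enforced only when the offending statement actually runs, so a code path your tests never exercised can behave differently once the pragma is removed:
use strict;
use warnings;

my $name = "some_counter";            # made-up name of a global
eval { ${$name} = 42; 1 }
    or print "blocked at run time: $@";
# Without 'use strict', the same statement silently creates and assigns
# the package variable $main::some_counter instead of dying.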
CodePudding user response:
strict and warnings are developer tools. If you are done developing and everything is clean, you don't need them anymore. Note what @ikegami has already said, though.
In certain environments where all standard error is logged, you have the possibility of a new perl, a changed setting, or an untested code path emitting warnings. I've had one situation in my career where a formerly clean script started emitting tons of warnings after a perl upgrade. This eventually filled up the disk and brought the service down. That was not fun. If no one is monitoring the warnings, it's pointless to emit them. But the lesson here is proper monitoring.
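If proper monitoring is the lesson, one low-tech option is to route warnings somewhere that is actually watched and rotated instead of a raw stderr capture. A minimal sketch using the core Sys::Syslog module (the 'myscript' ident is a placeholder):
use strict;
use warnings;
use Sys::Syslog qw(openlog syslog closelog);

openlog('myscript', 'pid', 'user');      # placeholder ident and facility
local $SIG{__WARN__} = sub {
    syslog('warning', '%s', $_[0]);      # send warnings to the system log
};
warn "demo warning\n";                   # caught by the handler above
closelog();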
I don't think warnings should be enabled in production code because you should have either fixed them or decided to ignore them. Sometimes, but rarely, I'll turn off warnings in a very small scope because the fix would make the code harder to read or cause other problems:
{
    no warnings qw(uninitialized);
    ...   # code that would otherwise warn about uninitialized values
}
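As a self-contained illustration (the field names are made up), only the 'uninitialized' category is silenced, and only inside the block:
use strict;
use warnings;

my ($first, $middle, $last) = ('Ada', undef, 'Lovelace');
my $full;
{
    no warnings qw(uninitialized);
    $full = "$first $middle $last";   # $middle is undef; no warning here
}
print "$full\n";                      # prints "Ada  Lovelace"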
But really, I usually just fix warnings and leave them enabled. I stopped caring around Perl v5.12, which turns on strict for free:
use v5.12; # free strict
I care more about specifying the minimal Perl version than removing a use warnings or adding a no warnings line.
And, with v5.36, I get warnings as well when I specify that as the minimal version:
use v5.36; # free strict and free warnings
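A short sketch of what the version pragmas imply (use v5.12 and later imply strict; use v5.36 and later also imply warnings and features such as say):
use v5.36;               # strict and warnings are now both on

my $count;
say "count is $count";   # run-time warning: uninitialized value
# $undeclared = 1;       # would be a compile-time error under strict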
Finally, your stated penalty is 7 ms per invocation. If that's the hot path in your code, you're a lucky person. If you are worried about startup time and need those 7 ms back, there are other things you should be doing to reclaim startup time.
But remember that a one-time benchmark on a multi-user, multi-process machine, even if you did run it for a couple of seconds, is tainted by anything else going on. If you can repeatedly show the 7 ms delay across all sorts of loads and situations, then we should believe it. In my own testing of the same thing on my MacBook Pro, I see differences of as much as 30%. I attribute most of that to operating-system-level stuff happening when I decide to run the test.
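If you want a spread instead of a single total, here is a minimal sketch (the 200-run count is arbitrary) that times each one-shot invocation separately, so load spikes show up in the max instead of hiding inside one number:
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

my @samples;
for (1 .. 200) {
    my $t0 = [gettimeofday];
    system($^X, '-e', 'use strict; use warnings;') == 0
        or die "perl exited nonzero: $?";
    push @samples, tv_interval($t0);
}
@samples = sort { $a <=> $b } @samples;
printf "min %.4fs  median %.4fs  max %.4fs\n",
    $samples[0], $samples[ @samples / 2 ], $samples[-1];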