While reading the section about exception handling and the compiler flag -fno-exceptions in the GCC manual, I came across the following lines:
Exception handling overhead can be measured in the size of the executable binary, and varies with the capabilities of the underlying operating system and specific configuration of the C++ compiler. On recent hardware with GNU system software of the same age, the combined code and data size overhead for enabling exception handling is around 7%.
I tried to reproduce this overhead by compiling some simple C++ programs (without any exception-throwing code) on Ubuntu 20.04 with gcc 10.3.0, both with and without the -fno-exceptions flag, but could not observe any difference whatsoever in the size of the compiled binary executables.
So I came to the conclusion that the quoted sentence from the manual refers only to the binary produced when recompiling the libstdc++ sources with -fno-exceptions, because in that case every occurrence of try, catch and throw is replaced by if ... else branches.
I am not entirely sure about this, so here are my questions:
a) User code compiled with -fno-exceptions only prevents use of the keywords try, catch and throw, and does not by itself produce a smaller binary, right?
b) User code compiled with -fno-exceptions can still be exposed to exceptions thrown from libstdc++ functions, if those have not been (re)compiled with -fno-exceptions, right?
c) User code compiled with -fexceptions (the default) will indeed produce a larger binary because of the generated frame-unwind information, but only when exceptions are actually used, right?
CodePudding user response:
It can reduce the size of the binary, and it often does for larger programs. However, it's not guaranteed to always do so.
a) User code being compiled with -fno-exceptions only prevents using the keywords try, catch and throw and does not generate a smaller binary by itself, right?
Nope, it definitely has an impact on code generation. However, exceptions do not increase code size indiscriminately: there have to be exceptions potentially in flight for frame-unwind code to be generated, and there has to be something to do during unwinding (i.e., objects with non-trivial destructors). If either is missing, -fno-exceptions won't make a difference for that function.
For example, compiling the following clearly shows a smaller code size with -fno-exceptions.
#include <vector>
void foo(); // could potentially throw.
void bar() {
std::vector<int> v(12); // has non-trivial destructor.
foo();
}
see on godbolt
Notice how each of the following changes eliminates the exception-handling code:
- changing the declaration of foo() to void foo() noexcept;
- providing a non-throwing implementation of foo() in the same TU: void foo() {}
- moving the construction of the vector to after foo() is called.
b) User code being compiled with -fno-exceptions can still be exposed to exceptions being thrown from libstdc++ functions, if these have not been (re)compiled with -fno-exceptions, right?
Any exception thrown by libstdc++ that bubbles up into code compiled with -fno-exceptions will result in the program being immediately terminated. If that counts as "being exposed" to you, then yes.
Also, be aware that a large portion of libstdc++ is implemented directly in headers, and your compiler flags are going to be applied to that portion of the library.
c) User code being compiled with -fexceptions (the default) will indeed produce a larger binary because of the generated frame unwind information, but only when exceptions are actually used, right?
Close, but not quite. The code will be emitted anywhere an exception might be thrown. That includes any call to a function not marked noexcept that is defined in a different TU from the one being compiled.