In POSIX environments, when a process exhausts its stack it receives a SIGSEGV
signal.
In my case, my program parses its input with a hand-written recursive descent parser, which can exhaust its stack if the input file has a particularly deeply nested grammatical structure (think of the grammar of a programming language and an input file with very deeply nested calls like `f(f(f(f(f(f(....))))))`).
My program has an option to run with an increased stack size for cases like this, but I would like to handle `SIGSEGV` and suggest that the user enable that option when the cause of the `SIGSEGV` is stack exhaustion.
Is there any way to tell that the program has received a `SIGSEGV` for this reason rather than because of some other bug?
CodePudding user response:
> Is there any way to tell that the program has received a `SIGSEGV` for this reason rather than because of some other bug?
Yes.
- Record the current stack size limit in `main()`, using `getrlimit(RLIMIT_STACK, ...)`. Store it in some global.
- Record the address of a local variable in `main()`; store it in another global.
- When entering the parser, take the address of some local there.
- Estimate the current stack usage as `(char*)ptr_to_local_in_main - (char*)ptr_to_local_in_parser` (a sketch follows this list).
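Here is a minimal sketch of that bookkeeping in C. The names (`g_stack_limit`, `g_stack_base`, `current_stack_usage`, `parse_expr`) are placeholders for your own code, and it assumes a downward-growing stack, which is the usual case on Linux/x86-64:

```c
#include <stdio.h>
#include <sys/resource.h>

static rlim_t g_stack_limit;      /* soft RLIMIT_STACK, in bytes         */
static char  *g_stack_base;       /* address of a local in main()        */

/* Rough number of stack bytes used so far (downward-growing stack). */
static size_t current_stack_usage(void)
{
    char marker;                  /* local in the *current* frame        */
    return (size_t)(g_stack_base - &marker);
}

static void parse_expr(void)      /* stand-in for the recursive parser   */
{
    size_t used = current_stack_usage();
    /* ... recursive-descent work; the danger-zone check that uses
       `used` is sketched further down ... */
    (void)used;
}

int main(void)
{
    char stack_top;               /* lives near the top of the stack     */
    struct rlimit rl;

    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    g_stack_limit = rl.rlim_cur;  /* may be RLIM_INFINITY; special-case it */
    g_stack_base  = &stack_top;

    parse_expr();
    return 0;
}
```

The pointer subtraction is only an estimate (formally it compares unrelated objects), but it is accurate enough to tell whether you are anywhere near the limit.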
If the current stack usage is within (say) 4-8 KiB of the limit, you are in the danger zone. At that point you could:
- Abandon the parse and tell the end user to increase the stack, or
- Set a global `in_danger_zone` flag, to be checked by the `SIGSEGV` handler (sketched below).
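A sketch of the flag-based variant follows; it repeats the globals from the previous snippet so it stands alone. One detail the list above does not spell out: a handler that is meant to catch stack overflow has to run on an alternate signal stack (`sigaltstack` plus `SA_ONSTACK`), because the ordinary stack has no room left for the handler's own frame. The 8 KiB margin, the 64 KiB alternate-stack size, and the function names are illustrative choices:

```c
#include <signal.h>
#include <string.h>
#include <sys/resource.h>
#include <unistd.h>

static rlim_t g_stack_limit;    /* filled from getrlimit() in main()     */
static char  *g_stack_base;     /* address of a local in main()          */
static volatile sig_atomic_t in_danger_zone;

static size_t current_stack_usage(void)
{
    char marker;
    return (size_t)(g_stack_base - &marker);
}

/* Only async-signal-safe calls (write, _exit, ...) are allowed here. */
static void segv_handler(int sig)
{
    (void)sig;
    if (in_danger_zone) {
        static const char msg[] =
            "stack exhausted: re-run with the larger-stack option\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);
    }
    _exit(1);
}

static void install_segv_handler(void)
{
    static char altstack[64 * 1024];   /* the handler's own stack        */
    stack_t ss = { .ss_sp = altstack, .ss_size = sizeof altstack };
    struct sigaction sa;

    sigaltstack(&ss, NULL);
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = segv_handler;
    sa.sa_flags   = SA_ONSTACK;        /* run on the alternate stack     */
    sigaction(SIGSEGV, &sa, NULL);
}

/* Call this on entry to each recursive parser function. */
static void update_danger_flag(void)
{
    in_danger_zone =
        (rlim_t)current_stack_usage() + 8 * 1024 >= g_stack_limit;
}
```

Call `install_segv_handler()` once from `main()` and `update_danger_flag()` at every parser entry point.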
Alternatively, you could just record `ptr_to_local_in_parser`, and in your signal handler see whether the delta between `ptr_to_local_in_{main,parser}` is large enough that stack overflow is likely.
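A sketch of that alternative, assuming the handler is installed with `SA_ONSTACK` on an alternate stack exactly as above; `g_parser_sp`, `note_parser_frame`, and the 64 KiB threshold are made up for the example:

```c
#include <sys/resource.h>
#include <unistd.h>

static char  *g_stack_base;          /* address of a local in main()      */
static char * volatile g_parser_sp;  /* refreshed at every parser entry   */
static rlim_t g_stack_limit;         /* from getrlimit(RLIMIT_STACK, ...) */

/* Call on entry to each recursive parser function. */
static void note_parser_frame(void)
{
    char marker;
    g_parser_sp = &marker;           /* only the address is kept; it is
                                        never dereferenced                */
}

static void segv_handler(int sig)
{
    (void)sig;
    if (g_parser_sp != NULL) {
        size_t delta = (size_t)(g_stack_base - g_parser_sp);
        /* Deep recursion close to the limit: report likely overflow. */
        if (delta + 64 * 1024 >= g_stack_limit) {
            static const char msg[] =
                "likely stack overflow: re-run with a larger stack\n";
            write(STDERR_FILENO, msg, sizeof msg - 1);
        }
    }
    _exit(1);
}
```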
It's possible in the signal handler to examine the `ucontext_t` passed into it, disassemble the faulting instruction, figure out whether that instruction manipulates the stack (e.g. `PUSH`, `CALL`, `MOV 0x...(%RSP)`, etc.), and make a more precise determination, but the complexity is probably not worth the additional accuracy you would get this way.
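For completeness, this is roughly where such a handler would start on Linux/x86-64 with glibc (`REG_RIP` needs `_GNU_SOURCE`; other platforms lay out `mcontext_t` differently). It only pulls out the faulting address and the instruction pointer; the disassembly itself is left out:

```c
#define _GNU_SOURCE           /* for REG_RIP */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <ucontext.h>
#include <unistd.h>

static void segv_action(int sig, siginfo_t *info, void *ctx)
{
    ucontext_t *uc = ctx;
    void *fault_addr = info->si_addr;                          /* address touched  */
    void *fault_ip   = (void *)uc->uc_mcontext.gregs[REG_RIP]; /* faulting insn    */

    /* A real implementation would decode the instruction at fault_ip
       (PUSH, CALL, a MOV relative to %rsp, ...) and compare fault_addr
       against the known stack bounds before deciding.  Note that
       snprintf is not async-signal-safe; it is tolerated here only as
       a last act before _exit(). */
    char buf[128];
    int n = snprintf(buf, sizeof buf, "SIGSEGV at ip=%p addr=%p\n",
                     fault_ip, fault_addr);
    if (n > 0)
        write(STDERR_FILENO, buf, (size_t)n);
    (void)sig;
    _exit(1);
}

static void install(void)
{
    static char altstack[64 * 1024];
    stack_t ss = { .ss_sp = altstack, .ss_size = sizeof altstack };
    struct sigaction sa;

    sigaltstack(&ss, NULL);
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = segv_action;
    sa.sa_flags     = SA_SIGINFO | SA_ONSTACK;
    sigaction(SIGSEGV, &sa, NULL);
}
```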