Suppose we have two files, mymanager.py and mysub.py.
mymanager.py
import time
from multiprocessing import Process

import mysub  # the process file


def main():
    xprocess = Process(
        target=mysub.main,
    )
    xprocess.start()
    xprocess.join()
    time.sleep(1)
    print(f"== Done, errorcode is {xprocess.exitcode} ==")


if __name__ == '__main__':
    main()
mysub.py
import sys


def myexception(exc_type, exc_value, exc_traceback):
    print("I want this to be printed!")
    print("Uncaught exception", exc_type, exc_value, exc_traceback)


def main():
    sys.excepthook = myexception  # !!!
    raise ValueError()


if __name__ == "__main__":
    sys.exit()
When executing mymanager.py, the resulting output is:
Process Process-1:
Traceback (most recent call last):
  File "c:\program files\python\3.9\lib\multiprocessing\process.py", line 315, in _bootstrap
    self.run()
  File "c:\program files\python\3.9\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\lx\mysub.py", line 11, in main
    raise ValueError()
ValueError
== Done, errorcode is 1 ==
Whereas the output I expected would be something like:
I want this to be printed!
Uncaught exception <class 'ValueError'> <traceback object at 0x0000027B6F952780>
which is what I get if I execute main from mysub.py directly, without the multiprocessing.Process.
I've checked the underlying CPython source (reference) and the problem seems to be that the try/except in the _bootstrap function takes precedence over my child process's sys.excepthook. But from my understanding, shouldn't the excepthook of the child process fire first and only then trigger the except in _bootstrap?
I need the child process to handle the exception using the sys.excepthook function. How can I achieve that?
CodePudding user response:
sys.excepthook is invoked when an exception goes uncaught (bubbling all the way out of the running program). But Process objects run their target function in a special bootstrap function (BaseProcess._bootstrap, if it matters to you) that intentionally catches all exceptions, prints information about the failing process plus the traceback, then returns an exit code to the caller (a launcher that varies by start method).
When using the fork start method, the caller of _bootstrap then exits the worker with os._exit(code) (a "hard exit" that bypasses the normal exception-handling machinery, though since your exception was already caught and handled, this hardly matters). When using 'spawn', it uses plain sys.exit over os._exit, but AFAICT the SystemExit exception that sys.exit is implemented in terms of is special-cased in the interpreter, so it doesn't pass through sys.excepthook when uncaught (presumably because its being implemented via exceptions is considered an implementation detail; asking to exit the program is not the same as dying with an unexpected exception).
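As a quick standalone illustration of that last point (a minimal sketch, not taken from the code above), an ordinary uncaught exception reaches the hook while sys.exit does not:

import sys


def hook(exc_type, exc_value, exc_traceback):
    print("excepthook called for", exc_type.__name__)


sys.excepthook = hook

# raise ValueError()  # this WOULD reach the hook and print the line above
sys.exit(1)           # SystemExit is handled by the interpreter itself; the hook never runs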
Summarizing: no matter the start method, there is no possible way for an exception raised by your code to be "unhandled" (for the purposes of reaching sys.excepthook), because multiprocessing handles all exceptions your function can throw on its own. It's theoretically possible to have an excepthook you set in the worker execute for exceptions raised after your target completes, if the multiprocessing wrapper code itself raises an exception, but only if you do pathological things like replacing the definition of os._exit or sys.exit (and it would only report the horrible things that happened because you replaced them; your own exception was already swallowed by that point, so don't do that).
If you really want to do this, the closest you could get would be to explicitly catch exceptions and manually call your handler. A simple wrapper function (added to mysub.py, which already imports sys) would allow this, for instance:
def handle_exceptions_with(excepthook, target, /, *args, **kwargs):
    try:
        target(*args, **kwargs)
    except:
        excepthook(*sys.exc_info())
        raise  # Or maybe convert to sys.exit(1) if you don't want multiprocessing to print it again
changing your Process launch to:
xprocess = Process(
    target=mysub.handle_exceptions_with,
    args=(mysub.myexception, mysub.main),
)
Or, for one-off use, just be lazy and only rewrite mysub.main as:
def main():
    try:
        raise ValueError()
    except:
        myexception(*sys.exc_info())
        raise  # Or maybe convert to sys.exit(1) if you don't want multiprocessing to print it again
and leave everything else untouched. You could still set your handler in sys.excepthook and/or threading.excepthook (to handle cases where a thread launched in the worker process might die with an unhandled exception), but it won't apply to the main thread of the worker process (or, more precisely, there's no way for an exception to reach it).
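For reference, a minimal self-contained sketch of that threading.excepthook option (hypothetical names, Python 3.8+ where the hook exists, not tied to the files above):

import threading


def my_thread_hook(args):
    # args carries exc_type, exc_value, exc_traceback and the failing thread
    print("Uncaught exception in", args.thread.name, "-", args.exc_type.__name__, args.exc_value)


threading.excepthook = my_thread_hook

worker_thread = threading.Thread(target=lambda: 1 / 0, name="worker-thread")
worker_thread.start()
worker_thread.join()

The same assignment made inside the worker's target would cover threads started there; it just never sees exceptions from the worker's main thread, for the reasons above.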