I have run into an issue in my program where exceptions are not thrown after launching a nested subprocess. That is, I have a method that I run in a multiprocessing.Process. Inside of that process ("the first process"), I launch another multiprocessing.Process. After calling start on the second process, the first process continues but does not raise exceptions, instead hanging when an exception is raised.
I cannot tell if this is expected behavior or if I have stumbled across a bug in Python. Here is a minimal example to demonstrate the issue:
import multiprocessing as mp
import time

def spin_proc():
    while True:
        print("spin")
        time.sleep(1)

def issue():
    sub_proc = mp.Process(target=spin_proc)
    sub_proc.start()
    print("Exception called next.")  # This text is printed, as expected.
    raise Exception()  # This exception is never raised.
    print("This text is not printed.")

first_proc = mp.Process(target=issue)
first_proc.start()
The output is:
$ python3 except.py
Exception called next.
spin
spin
spin
spin
And it will continue printing "spin" indefinitely until I hit Ctrl-C. I would have expected the exception to get raised and crash the program.
I have confirmed this behavior in Python 3.8.10 and Python 3.9.2.
Is this expected behavior? If so, why? It has the effect of hiding exceptions in my code and blocking unexpectedly, which was very frustrating when trying to debug something new and not seeing any output. If it does turn out to be a bug, please let me know and I will report it to the Python bug tracker.
Thank you! And here is the output when pressing Ctrl-C. It looks like the Exception was in fact raised, but the process then hung for some reason:
^CError in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll
    pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt
Process Process-1:1:
Process Process-1:
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "except.py", line 7, in spin_proc
    time.sleep(1)
KeyboardInterrupt
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "except.py", line 13, in issue
    raise Exception()  # This exception is never raised.
Exception

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 318, in _bootstrap
    util._exit_function()
  File "/usr/lib/python3.8/multiprocessing/util.py", line 357, in _exit_function
    p.join()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 149, in join
    res = self._popen.wait(timeout)
  File "/usr/lib/python3.8/multiprocessing/popen_fork.py", line 47, in wait
    return self.poll(os.WNOHANG if timeout == 0.0 else 0)
  File "/usr/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll
    pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt
CodePudding user response:
Okay, it seems this was probably user error on my part. I posted this question to Twitter, where user @strinkleydimbu1 pointed out that I could get the expected behavior with the daemon flag. Specifically, you can change this line:
sub_proc = mp.Process(target=spin_proc)
to
sub_proc = mp.Process(target=spin_proc, daemon=True)
And the program behaves as expected. Setting daemon=True tells Python to terminate the child process when its parent exits. Without it, multiprocessing's exit handler joins every non-daemonic child before the parent process is allowed to exit, which is exactly what the Ctrl-C traceback shows: the Exception was raised, and then util._exit_function() called p.join() and blocked forever waiting for the infinite loop in spin_proc to finish. It was surprising to me that an exception could simply hang the process instead of producing any output, but since I have not thoroughly read the docs I can't say whether the behavior is truly unexpected.
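For reference, here is the full minimal example with that one-line change applied (a sketch of the fix described above; everything else is unchanged from the original code). With daemon=True the child no longer keeps the first process alive, so the exception surfaces instead of hanging:

import multiprocessing as mp
import time

def spin_proc():
    while True:
        print("spin")
        time.sleep(1)

def issue():
    # daemon=True: this child is terminated automatically when the process
    # running issue() exits, so the exit handler does not block in join()
    # waiting for the infinite loop to finish.
    sub_proc = mp.Process(target=spin_proc, daemon=True)
    sub_proc.start()
    print("Exception called next.")
    raise Exception()  # Now surfaces: multiprocessing prints its traceback.

first_proc = mp.Process(target=issue)
first_proc.start()

Note that the exception still only terminates the first process: multiprocessing prints its traceback (like the "Process Process-1:" block above) and sets that process's exitcode, but the exception does not propagate to the main script.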