I have been having a weird issue where my new laptop handles a (admittedly computationally intensive) program I wrote worse than my last one did. The program is written in Python and measures errors in a numerical model over a large data set. It uses the threading library to try to speed up some parallelizable tasks. The older laptop has an i7-8750H and 16 GB of DDR4 RAM. The newer laptop has an i9-12900H and 40 GB of DDR5 RAM (16 GB running quad channel, 24 GB running dual channel). In theory the new laptop should outperform the old one by a pretty significant margin, yet it struggles significantly more with this program than the old one did. I was wondering if anyone had any ideas about why this could be. My only thought is that the threading library interacts strangely with the new Intel 12th gen performance core and efficiency core setup.
I have tried restarting the newer laptop, and it has been handling other tasks very well; it only seems to have issues with this program. I have also downloaded CPU-Z, and the benchmarks look roughly normal.
Edit: The program usually took ~90 min to run on my last laptop, but there was randomness involved, so run times were inconsistent. On the new one it gets through about 3% of the program in 90 minutes.
I'll try to upload an MRE soon.
CodePudding user response:
You have not shared the code, so I cannot pinpoint the issue, but here is the general explanation.
Python multithreading does not guarantee parallelism because of the Global Interpreter Lock (GIL): at any given time, only a single thread executes Python bytecode inside the process.
Implementing multithreading for CPU-bound operations can actually add overhead. To achieve true parallelism and improve the performance of your code, use multiprocessing instead: each process has its own Python interpreter (and its own GIL), so they can all execute in parallel.
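Here is a minimal sketch of that approach using concurrent.futures.ProcessPoolExecutor. The compute_error function and the way the data is split into chunks are hypothetical placeholders for whatever your error measurement actually does:

```python
from concurrent.futures import ProcessPoolExecutor
import os

def compute_error(chunk):
    # placeholder for the CPU-bound error measurement on one chunk of data
    return sum(x * x for x in chunk)

def main():
    # hypothetical data split into chunks; replace with your real dataset
    chunks = [list(range(i, i + 10_000)) for i in range(0, 1_000_000, 10_000)]

    # each worker is a separate process with its own interpreter and GIL,
    # so CPU-bound chunks can run in parallel across cores
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(compute_error, chunks))

    print(sum(results))

if __name__ == "__main__":
    # the __main__ guard matters on Windows/macOS, where workers are spawned
    main()
```

You may want to experiment with max_workers; on a 12900H, limiting workers to the number of performance cores is worth trying.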
Multithreading only makes sense for I/O-bound operations.
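Threading still helps when the work is dominated by waiting (network, disk), because the GIL is released during blocking I/O calls. A small illustration, with a hypothetical list of URLs:

```python
from concurrent.futures import ThreadPoolExecutor
import urllib.request

urls = ["https://example.com"] * 5  # hypothetical URLs to fetch

def fetch(url):
    # the GIL is released while waiting on the network,
    # so these downloads overlap even though they run in threads
    with urllib.request.urlopen(url, timeout=10) as resp:
        return len(resp.read())

with ThreadPoolExecutor(max_workers=5) as pool:
    sizes = list(pool.map(fetch, urls))

print(sizes)
```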