Python3 multiprocessing pool working on an object's method does not get updated data of the object


I have OpenCV tracker objects that are being updated. To make things faster I used a multiprocessing pool with the map_async function to parallelize the work. It works as expected and I get a significant speed-up. But one thing is strange: when I reinitialize my trackers with OpenCV (i.e. give them a new bounding box), the tracker is not updated but continues with the previous bounding box. This only happens when using the multiprocessing pool; it does not happen when using a sequential loop over the tracker list. I suspect that the worker process makes its own copy of the tracker object, so the reinitialization does not apply to that newly created copy. However, to my understanding, when I call map_async a new process is created, and process.wait() waits until that process has finished its work.
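Here is a toy sketch (not my tracker code, just a plain class and a plain multiprocessing.Pool) that illustrates what I suspect is happening: the worker mutates its own copy of the object, while the original in the parent process stays untouched:

    from multiprocessing import Pool

    class Counter:
        def __init__(self):
            self.value = 0

        def increment(self, amount):
            self.value += amount    # mutates the copy living in the worker process
            return self.value

    if __name__ == "__main__":
        c = Counter()
        with Pool(1) as pool:
            result = pool.apply_async(c.increment, (5,))
            print(result.get())     # 5 -> the worker's copy was incremented
        print(c.value)              # 0 -> the original in the parent is unchanged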

I already tried creating a new pool every time I call my updateTrackers() function. That did not solve the problem.

Sequential code where reinitialization works:

    def updateTrackers(self, frame):
        # update each tracker in turn with the new frame
        for t in self.trackers:
            t.update(frame)

Parallel (pool) code where reinitialization does not work:

    def updateTrackers(self, frame):
        processes = []
        # submit one update task to the pool for each tracker
        for t in self.trackers:
            processes.append(self.ProcessPool.map_async(t.update, (frame, )))

        # wait for all submitted tasks to finish
        for p in processes:
            p.wait()

The code for reinitializing the tracker object is the same in both cases:

    def reInitTracker(self, index, frame):
        if index >= self.nmbTrackers:
            return

        # let the user select a new bounding box for this tracker
        initBB = cv2.selectROI("Camera view", frame, fromCenter=False,
                showCrosshair=True)
        # replace the old tracker with a freshly initialized one
        self.trackers[index].tracker.clear()
        self.trackers[index].tracker = cv2.TrackerKCF_create()
        self.trackers[index].tracker.init(frame, initBB)

EDIT: I just found out that the trackers do not get updated at all when parallelizing them, which is at least consistent with them also not getting reinitialized.

CodePudding user response:

If I understand your issue correctly, the t (tracker) object is not getting updated when you pass t.update as the worker function in your call to map_async.

Yes: t will be serialized, sent to the multiprocessing pool, and deserialized there for execution of the update method, so if t's state is modified by update, the change will not be reflected back in the main process. As an aside, it is rather unusual to call a map method with an iterable consisting of a single element, as you are doing; apply_async would be more appropriate.
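To make that aside concrete, both calls below submit exactly one invocation of t.update(frame); the map variant just wraps the single result in a one-element list (a sketch using the same names as in the question):

    # map_async over a one-element iterable: update is called once with frame
    async_result = self.ProcessPool.map_async(t.update, (frame,))
    updated = async_result.get()[0]   # map returns a list, so take the single element

    # apply_async expresses the same single call directly
    async_result = self.ProcessPool.apply_async(t.update, (frame,))
    updated = async_result.get()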

The solution is for your update method (which you unfortunately have not shown) to return self, and then to update the tracker object in the main process with this return value:

    def updateTrackers(self, frame):
        results = [self.ProcessPool.apply_async(t.update, (frame,)) for t in self.trackers]
        # It is assumed that t.update now returns self
        # It is also assumed that self.trackers is a list or otherwise indexable:
        for idx, result in enumerate(results):
            # Wait for task to finish and update tracker with returned value:
            self.trackers[idx] = result.get()
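Since the update method itself is not shown, here is a hypothetical sketch of what such a tracker wrapper could look like; the only essential part is the return self at the end, which lets the main process replace its now-stale copy with the worker's updated one:

    import cv2

    class TrackerWrapper:
        def __init__(self, frame, initBB):
            self.tracker = cv2.TrackerKCF_create()
            self.tracker.init(frame, initBB)
            self.ok = False
            self.box = None

        def update(self, frame):
            # this runs in the worker process on a copy of the object
            self.ok, self.box = self.tracker.update(frame)
            return self    # hand the modified copy back to the main process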

CodePudding user response:

Referring to BooBoo's comment:

No, the update method does not add any attributes. And you are right, that is indeed strange. I created a minimal, reproducible example which causes the same error:

    import cv2
    import numpy as np
    from pathos.multiprocessing import ProcessPool

    tracker = cv2.TrackerKCF_create()
    Pool = ProcessPool()

    # dummy values: a blank 100x100 frame and an arbitrary bounding box
    frame = np.zeros((100, 100, 3), dtype=np.uint8)
    initBB = (1, 2, 3, 4)
    tracker.init(frame, initBB)

    p = Pool.apipe(tracker.update, frame)
    p.get()

The problem seems to be the get method.
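If the failure really is caused by serialization of the tracker object itself (an assumption, not something confirmed here), a quick way to check is to try serializing it directly; pathos uses dill rather than pickle, so both are worth trying:

    import pickle

    import cv2
    import dill

    tracker = cv2.TrackerKCF_create()
    for name, dumps in (("pickle", pickle.dumps), ("dill", dill.dumps)):
        try:
            dumps(tracker)
            print(name, "can serialize the tracker")
        except Exception as exc:
            print(name, "cannot serialize the tracker:", exc)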
