Is using an 'anonymous' threading.Lock() always an error?


I'm trying to make sense of some code, and I came across the function below:

def get_batch(
    self,
) -> Union[Tuple[List[int], torch.Tensor], Tuple[None, None]]:
    """
    Return an inference batch
    """
    with threading.Lock():
        indices: List[int] = []
        for _ in range(self.batch_size):
            try:
                index = self.full_queue.get(timeout=0.05)
                indices.append(index)
            except:
                break

        if indices:
            # tqdm.write(str(len(jobs)))
            batch = {
                key: torch.stack([self.input_buffers[key][index] for index in indices])
                .to(torch.device('cpu'), non_blocking=True)
                .unsqueeze(0)
                for key in self.input_buffers
            }
            return indices, batch
        else:
            return None, None

The with threading.Lock() line must be a mistake, right? Generally speaking, a lock has to be shared between threads to do anything, and this one isn't shared with anything.

CodePudding user response:

Yes, @Homer512's comment nailed it. Each call to the function creates a brand-new Lock object, and there is no way for that object to be shared between threads. Nothing is accomplished by acquiring a lock that no other thread can ever hold, so the with block never blocks and protects nothing. It's effectively a no-op.
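
The usual fix is to create the lock once, store it somewhere every thread can reach (for example, as an instance attribute), and acquire that shared object inside the method. Here is a minimal sketch of that pattern; the BatchCollector class and its attributes are illustrative stand-ins, not the original code:

import queue
import threading
from typing import List, Optional


class BatchCollector:
    """Illustrative stand-in for the class in the question."""

    def __init__(self, batch_size: int) -> None:
        self.batch_size = batch_size
        self.full_queue: "queue.Queue[int]" = queue.Queue()
        # Created once; every thread calling get_batch() sees this same object.
        self._lock = threading.Lock()

    def get_batch(self) -> Optional[List[int]]:
        # All callers acquire the same lock, so only one thread at a time
        # can drain the queue here.
        with self._lock:
            indices: List[int] = []
            for _ in range(self.batch_size):
                try:
                    indices.append(self.full_queue.get(timeout=0.05))
                except queue.Empty:
                    break
            return indices or None

With the anonymous lock, by contrast, two threads calling the method concurrently each acquire their own freshly created lock, so both enter the critical section at the same time and the queue is drained with no protection at all.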
