Which blocking queue to use with ThreadPoolExecutor? Any advantages with fixed capacity LinkedBlockingDeque?


I see from the Java docs:

ThreadPoolExecutor(int corePoolSize,
                   int maximumPoolSize,
                   long keepAliveTime,
                   TimeUnit unit,
                   BlockingQueue<Runnable> workQueue,
                   RejectedExecutionHandler handler)

Where:

workQueue – the queue to use for holding tasks before they are executed. This queue will hold only the Runnable tasks submitted by the execute method.

Now, Java provides various types of blocking queues, and the Java doc clearly says when to use which type of queue with ThreadPoolExecutor:

Queuing
    Any BlockingQueue may be used to transfer and hold submitted tasks. The use of this queue interacts with pool sizing:

        If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing.
        If corePoolSize or more threads are running, the Executor always prefers queuing a request rather than adding a new thread.
        If a request cannot be queued, a new thread is created unless this would exceed maximumPoolSize, in which case, the task will be rejected.

    There are three general strategies for queuing:

        Direct handoffs. A good default choice for a work queue is a SynchronousQueue that hands off tasks to threads without otherwise holding them. Here, an attempt to queue a task will fail if no threads are immediately available to run it, so a new thread will be constructed. This policy avoids lockups when handling sets of requests that might have internal dependencies. Direct handoffs generally require unbounded maximumPoolSizes to avoid rejection of new submitted tasks. This in turn admits the possibility of unbounded thread growth when commands continue to arrive on average faster than they can be processed.
        Unbounded queues. Using an unbounded queue (for example a LinkedBlockingQueue without a predefined capacity) will cause new tasks to wait in the queue when all corePoolSize threads are busy. Thus, no more than corePoolSize threads will ever be created. (And the value of the maximumPoolSize therefore doesn't have any effect.) This may be appropriate when each task is completely independent of others, so tasks cannot affect each others execution; for example, in a web page server. While this style of queuing can be useful in smoothing out transient bursts of requests, it admits the possibility of unbounded work queue growth when commands continue to arrive on average faster than they can be processed.
        Bounded queues. A bounded queue (for example, an ArrayBlockingQueue) helps prevent resource exhaustion when used with finite maximumPoolSizes, but can be more difficult to tune and control. Queue sizes and maximum pool sizes may be traded off for each other: Using large queues and small pools minimizes CPU usage, OS resources, and context-switching overhead, but can lead to artificially low throughput. If tasks frequently block (for example if they are I/O bound), a system may be able to schedule time for more threads than you otherwise allow. Use of small queues generally requires larger pool sizes, which keeps CPUs busier but may encounter unacceptable scheduling overhead, which also decreases throughput.
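The three strategies above map directly onto constructor arguments. A rough sketch (the class name and the pool/queue sizes are illustrative choices, not prescriptions):

```java
import java.util.concurrent.*;

public class QueueStrategies {
    public static void main(String[] args) {
        // 1. Direct handoff: SynchronousQueue holds no tasks; a submission is
        //    either handed to an idle thread or triggers creation of a new one.
        //    (This is essentially what Executors.newCachedThreadPool() builds.)
        ExecutorService handoff = new ThreadPoolExecutor(
                0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS,
                new SynchronousQueue<>());

        // 2. Unbounded queue: maximumPoolSize is irrelevant; at most 4 threads
        //    ever run, everything else waits in the queue.
        //    (Like Executors.newFixedThreadPool(4).)
        ExecutorService unbounded = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>());

        // 3. Bounded queue: at most 100 queued tasks plus up to 8 threads;
        //    beyond that, submissions are rejected (default AbortPolicy).
        ExecutorService bounded = new ThreadPoolExecutor(
                2, 8, 30L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(100));

        handoff.shutdown();
        unbounded.shutdown();
        bounded.shutdown();
    }
}
```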

Below is my question.

I have seen code usages like this:

BlockingQueue<Runnable> workQueue = new LinkedBlockingDeque<>(90);

ExecutorService executorService = new ThreadPoolExecutor(1, 10, 30,
                        TimeUnit.SECONDS, workQueue,
                        new ThreadPoolExecutor.CallerRunsPolicy());

So, since the deque in the above code has a fixed capacity anyway, what advantage do I get from LinkedBlockingDeque<>(90) compared to the following?

  1. LinkedBlockingQueue<>(90) — I just want to know the deque's advantage over a queue in this case, not in general. How does the Executor benefit from a deque over a queue?
  2. ArrayBlockingQueue<>(90) — (I see one can also mention fairness etc., but that is not my current interest.) Why not just use an array-backed queue instead of a deque, when the deque has a fixed capacity anyway?

CodePudding user response:

  • LinkedBlockingQueue is an optionally-bounded blocking queue based on linked nodes. If no capacity is given, it is effectively unbounded (Integer.MAX_VALUE); in your example it is constructed with a capacity, so it is bounded.
  • ArrayBlockingQueue is a bounded blocking queue in which a fixed-size array holds the elements.

In your case, there's no benefit anywhere. ArrayBlockingQueue may prove to be more efficient, as it uses a fixed-size array in a single contiguous block of memory, whereas the linked variants allocate a node per element.

The difference between a Queue and a Deque is the access mechanism. A Queue is FIFO (first in, first out): elements are added at the tail and removed from the head. A Deque (double-ended queue) additionally allows insertion and removal at both ends, which is what lets it also be used LIFO (last in, first out), like a stack.

  • In FIFO order, the first task inserted is the first one to be executed
  • In LIFO order, the last task inserted is the first one to be executed

Note, however, that ThreadPoolExecutor only ever calls the BlockingQueue methods on the work queue, so a LinkedBlockingDeque passed to it still behaves as a plain FIFO queue; its double-ended methods only matter if your own code calls them.
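Driving a LinkedBlockingDeque directly shows the head/tail behaviour the executor relies on. A minimal sketch (the class name and task strings are my own; no executor involved):

```java
import java.util.concurrent.LinkedBlockingDeque;

public class DequeOrder {
    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingDeque<String> deque = new LinkedBlockingDeque<>(90);

        // ThreadPoolExecutor only uses the BlockingQueue view of the deque:
        // offer(e) appends at the tail, take()/poll() removes from the head,
        // so the executor always drains tasks in FIFO order.
        deque.offer("task-1");
        deque.offer("task-2");
        System.out.println(deque.take());   // task-1 (FIFO)

        // The deque-specific methods only matter if *your* code calls them,
        // e.g. pushing an urgent task to the front of the line:
        deque.addFirst("urgent");
        System.out.println(deque.take());   // urgent, ahead of task-2
    }
}
```

So the only practical edge of a bounded LinkedBlockingDeque over LinkedBlockingQueue here is that code holding a reference to the deque could jump the queue with addFirst; the executor itself never does.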

CodePudding user response:

The main benefit is when you're using the thread pool to execute some kind of a pipeline. As a rule of thumb, at each stage in a pipeline, the queue either is almost always empty (producer(s) tend(s) to be slower than the consumer(s)), or else the queue almost always is full (producer(s) tend(s) to be faster.)

If the producer(s) is/are faster, and if the application is meant to continue running indefinitely, then you need a fixed-size, blocking queue to put "back pressure" on the producers. If there were no back pressure, the queue would continue to grow until eventually something bad happened (e.g., the process runs out of memory, or the system breaks down because "tasks" spend too much time delayed in the queues).
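Using the constructor from the question, a minimal back-pressure sketch (the class name, task count, and sleep duration are my own choices): with the 90-slot queue full and all 10 threads busy, CallerRunsPolicy makes the submitting thread run the task itself, which naturally slows the producer down.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class BackPressure {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 10, 30L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(90),
                new ThreadPoolExecutor.CallerRunsPolicy());

        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 1000; i++) {
            pool.execute(() -> {
                try { Thread.sleep(2); } catch (InterruptedException ignored) {}
                done.incrementAndGet();
            });
        }
        // Every task eventually runs -- either on a pool thread or, when the
        // bounded queue is full, inline on the submitting (caller) thread.
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println(done.get()); // 1000
    }
}
```

With the default AbortPolicy instead, the same loop would throw RejectedExecutionException once the queue and pool were saturated; CallerRunsPolicy trades that failure for throttling.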
