A thread pool that novices can understand at a glance

I think everyone can feel that using multithreading directly is quite troublesome: threads have to be created, destroyed, and scheduled, yet in everyday work we rarely create threads by hand. That is because many frameworks already use thread pools under the hood.

The thread pool is a tool that helps us manage threads. It maintains multiple threads, which can reduce resource consumption and improve system performance.

By using a thread pool, developers can focus on the task code itself without worrying about how threads are executed, which decouples task submission from task execution.

This article will explain the thread pool from the perspective of what, why, and how:

  1. What is a thread pool?
  2. Why use a thread pool?
  3. How to use a thread pool?

Thread Pool

A thread pool is a pooling technique, similar to a database connection pool, an HTTP connection pool, and so on.

The idea of pooling is mainly to reduce the cost of repeatedly acquiring and releasing resources and to improve resource utilization.

For example, in some remote areas where it is inconvenient to fetch water, people will fetch water from time to time and store it in a pool, so that they can just get it when they need it.

The same goes for threads. Creating and destroying a thread every time consumes a lot of system resources, so we build a pool to manage threads in a unified way: take a thread from the pool when you need one and put it back when you are done, with no need to destroy it. Isn't that much more convenient?

Thread pools in Java are implemented in the juc package (java.util.concurrent), and the most important class is ThreadPoolExecutor. We will look at how to use it in detail later.

Benefits of thread pools

In the first article on multithreading, we said that a process applies for resources that its threads then use, so threads occupy a considerable amount of system resources. Using a thread pool to manage threads in a unified way solves this resource-management problem nicely.

For example, since threads no longer need to be created and destroyed for every task (just take one when needed and return it after use), a lot of resource overhead is saved and the system runs faster.

Unified management and scheduling also allows the pool to allocate its internal resources sensibly and to adjust the number of threads according to the current state of the system.

In summary, there are three benefits:

  1. Reduce resource consumption: Avoid creating and destroying threads multiple times by reusing existing threads to perform tasks.
  2. Improve response speed: Because the step of creating a thread is omitted, execution can start immediately when a task is received.
  3. Provide additional functions: the extensibility of the thread pool allows us to add features of our own, such as delayed or periodic execution of tasks.

Having said so much, we finally get to the point of today. Let's see how to use the thread pool.

Thread pool implementation

Java provides thread pools through the Executor interface and its implementations.

[Figure: the Executor interface hierarchy]

There are two major types of thread pools we commonly use:

  • ThreadPoolExecutor
  • ScheduledThreadPoolExecutor

The difference between the two is that the first is an ordinary thread pool, while the second can also execute tasks on a schedule, either after a delay or periodically; a small sketch of the scheduled variant follows.
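
As a quick illustration of the scheduled variant, here is a minimal sketch (the pool size, delays, and task bodies are placeholder values, not taken from the article):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledPoolDemo {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

        // Run once, 1 second after submission
        scheduler.schedule(
                () -> System.out.println("one-shot task on " + Thread.currentThread().getName()),
                1, TimeUnit.SECONDS);

        // Run repeatedly: first after 1 second, then every 2 seconds
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("periodic task on " + Thread.currentThread().getName()),
                1, 2, TimeUnit.SECONDS);

        // The pool (and the JVM) keeps running until shutdown() is called.
    }
}
```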

Of course, there are other thread pools as well, such as ForkJoinPool, introduced in JDK 1.7, which splits a large task into smaller subtasks, executes them in parallel, and then merges the results; see the sketch below.
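
To make the split-then-merge idea concrete, here is a minimal ForkJoinPool sketch. The array-sum task and the threshold are invented for illustration and are not from the original article:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Toy task: sums a range of an array, splitting the range while it is large
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long[] numbers;
    private final int start, end;

    SumTask(long[] numbers, int start, int end) {
        this.numbers = numbers;
        this.start = start;
        this.end = end;
    }

    @Override
    protected Long compute() {
        if (end - start <= THRESHOLD) {            // small enough: compute directly
            long sum = 0;
            for (int i = start; i < end; i++) sum += numbers[i];
            return sum;
        }
        int mid = (start + end) / 2;               // otherwise split into two halves
        SumTask left = new SumTask(numbers, start, mid);
        SumTask right = new SumTask(numbers, mid, end);
        left.fork();                               // run the left half asynchronously
        return right.compute() + left.join();      // compute the right half, then merge
    }
}
```

Usage would then be along the lines of new ForkJoinPool().invoke(new SumTask(array, 0, array.length)).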

So what process does a task go through after it is submitted to a thread pool?

Execution process

Internally, the thread pool actually uses the producer-consumer model (covered earlier in this series) to decouple threads from tasks, which lets the pool manage tasks and threads at the same time.

When a task is submitted to the thread pool, it needs to go through the following process:

[Figure: task execution flow in a thread pool]

  1. First, the pool checks whether the core thread pool is full. The core thread pool is the set of threads the pool always keeps alive, regardless of load. For example, if the pool's total capacity is 100 threads and we set the core size to 50, then 50 threads stay alive no matter how few users there are; the exact number is of course determined by the business. If the core is not full, a new core thread is created to run the task.
  2. If the core is full, the task is placed in the blocking queue, the BlockingQueue mentioned in the producer-consumer article, where it waits for a thread to become free.
  3. If the blocking queue is also full, the pool then checks whether the whole pool is full, i.e. whether there are already 100 threads rather than just the 50 core ones; if not, an extra non-core thread is created to run the task.
  4. If the pool is full and no more threads can be created, the task is handled according to the saturation strategy, also called the rejection strategy, which we will cover later. A small demo of this flow follows the list.
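
The flow can be observed with a deliberately tiny pool. In the sketch below (the sizes are illustrative, not from the article) the first task occupies the single core thread, the second waits in the queue, the third forces a non-core thread to be created, and the fourth is rejected:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FlowDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1,                                // corePoolSize: one thread kept alive
                2,                                // maximumPoolSize: at most two threads
                60L, TimeUnit.SECONDS,            // idle timeout for the non-core thread
                new ArrayBlockingQueue<>(1));     // bounded queue that holds one waiting task

        Runnable sleepy = () -> {
            try { Thread.sleep(2_000); } catch (InterruptedException ignored) { }
        };

        for (int i = 1; i <= 4; i++) {
            try {
                pool.execute(sleepy);
                System.out.println("task " + i + " accepted, pool size = " + pool.getPoolSize()
                        + ", queued = " + pool.getQueue().size());
            } catch (RejectedExecutionException e) {
                System.out.println("task " + i + " rejected");   // default AbortPolicy throws
            }
        }
        pool.shutdown();
    }
}
```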

ThreadPoolExecutor

We mainly talk about ThreadPoolExecutor, which is the most commonly used thread pool.

[Figure: the ThreadPoolExecutor constructors]

Here we can see that there are 4 constructors in this class. If we click in and take a closer look, we can see that the first three actually call the last one, so we only need to look at the last one.

```java
public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler) {
    ...
}
```

Here we take a closer look at these parameters:

1. corePoolSize: the size of the core thread pool mentioned above. Core threads are never laid off: they stay in the pool even when idle, unless allowCoreThreadTimeOut is set.

  • corePoolSize the number of threads to keep in the pool, even if they are idle, unless {@code allowCoreThreadTimeOut} is set

2. maximumPoolSize: the maximum number of threads the pool may hold.

  • maximumPoolSize the maximum number of threads to allow in the pool

3. keepAliveTime: the keep-alive time. When the number of threads in the pool exceeds the core size, this is how long the excess threads may stay idle before being destroyed.

  • keepAliveTime when the number of threads is greater than the core, this is the maximum time that excess idle threads will wait for new tasks before terminating.

4. unit: the time unit for the keepAliveTime above.

  • unit the time unit for the {@code keepAliveTime} argument

5. workQueue: the blocking queue. As noted above, the thread pool is itself a producer-consumer model: tasks are what gets produced and threads are the consumers, so this blocking queue coordinates the pace of production and consumption.

  • workQueue the queue to use for holding tasks before they are executed.

6. threadFactory: the factory pattern is used here to create new threads.

  • threadFactory the factory to use when the executor creates a new thread

7. handler: the rejection strategy, also called the saturation strategy.

  • handler the handler to use when execution is blocked because the thread bounds and queue capacities are reached
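
Putting the seven parameters together, a hand-built pool might look like the following sketch. The concrete sizes, queue, factory, and policy are illustrative choices (reusing the 50/100 numbers from the flow example above), not values prescribed by the article:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CustomPool {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                50,                                     // corePoolSize: threads kept even when idle
                100,                                    // maximumPoolSize: upper bound on live threads
                60L,                                    // keepAliveTime for threads beyond the core
                TimeUnit.SECONDS,                       // unit for keepAliveTime
                new LinkedBlockingQueue<>(1_000),       // workQueue: tasks wait here when the core is busy
                Executors.defaultThreadFactory(),       // threadFactory: how new threads are created
                new ThreadPoolExecutor.AbortPolicy());  // handler: what to do when pool and queue are full

        pool.execute(() -> System.out.println("running on " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}
```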

So, as in the sketch above, we can construct a thread pool ourselves by passing in these seven parameters. Of course, the thoughtful Java also packages several ready-made thread pools for us to use conveniently:

  • newCachedThreadPool
  • newFixedThreadPool
  • newSingleThreadExecutor

Let’s look at the meaning and usage of each one in detail.
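
All three usage examples below submit a Task object whose definition the article does not show. A minimal stand-in (hypothetical, just enough to reproduce the kind of output described) could be a Runnable that reports which pool thread runs it:

```java
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for the Task class used in the examples below
public class Task implements Runnable {
    @Override
    public void run() {
        try {
            TimeUnit.MILLISECONDS.sleep(100);   // simulate a short unit of work
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println("Executed by " + Thread.currentThread().getName());
    }
}
```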

newCachedThreadPool

```java
public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}
```

Here we can see that

  • The core pool size is 0, meaning it keeps no threads alive permanently;
  • The maximum capacity is Integer.MAX_VALUE;
  • The keep-alive time of each thread is 60 seconds, i.e. a thread that stays idle for 1 minute is recycled;
  • Finally, a SynchronousQueue is used, which holds no tasks itself but hands each one straight to a thread.

Its applicable scenarios are described in the source code:

  • These pools will typically improve the performance of programs that execute many short-lived asynchronous tasks.

Let’s see how to use it:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class NewCacheThreadPool {

    public static void main(String[] args) {
        // Create a thread pool
        ExecutorService executorService = Executors.newCachedThreadPool();
        // Submit tasks to the thread pool
        for (int i = 0; i < 50; i++) {
            executorService.execute(new Task()); // the pool executes the task
        }
        executorService.shutdown();
    }
}
```

Execution Result:

[Figure: console output of the newCachedThreadPool example]

It can be clearly seen that threads 1, 2, 3, 5, and 6 are quickly reused.

newFixedThreadPool

```java
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}
```

The characteristics of this thread pool are:

  1. The number of threads in the thread pool is fixed, and is also a parameter we need to enter when creating a thread pool;
  2. Tasks beyond what these threads can handle wait in the blocking queue.

Its applicable scenarios are:

  • Creates a thread pool that reuses a fixed number of threads operating off a shared unbounded queue.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FixedThreadPool {
    public static void main(String[] args) {
        ExecutorService executorService = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 200; i++) {
            executorService.execute(new Task());
        }
        executorService.shutdown();
    }
}
```

[Figure: console output of the newFixedThreadPool example]

Here the pool is limited to at most 10 threads, so even with 200 tasks to execute, only threads 1 through 10 ever run.

newSingleThreadExecutor

```java
public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>()));
}
```

As the name suggests, this thread pool only has 1 thread.

Applicable scenarios are:

  • Creates an Executor that uses a single worker thread operating off an unbounded queue.

Let’s take a look at the effect.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SingleThreadPool {
    public static void main(String[] args) {
        ExecutorService executorService = Executors.newSingleThreadExecutor();
        for (int i = 0; i < 100; i++) {
            executorService.execute(new Task());
        }
        executorService.shutdown();
    }
}
```

[Figure: console output of the newSingleThreadExecutor example]

I can clearly feel some lag when the results are displayed here, which is not the case in the first two examples. After all, there is only one thread running here.

Summary

So all of these thread pools ultimately go through the ThreadPoolExecutor class; they just pass different parameters.

There are two parameters to pay special attention to here:

  • The first is the choice of workQueue, i.e. which blocking queue to use; I may write about that separately later.
  • The second is the setting of the handler.

Notice that none of the three ready-made thread pools above sets a handler. That is because they all use the defaultHandler.

```java
/**
 * The default rejected execution handler
 */
private static final RejectedExecutionHandler defaultHandler =
    new AbortPolicy();
```

There are 4 rejection strategies in ThreadPoolExecutor, all of which implement RejectedExecutionHandler:

  1. AbortPolicy means rejecting the task and throwing a RejectedExecutionException. I call this a "formal rejection", for example, you have completed the last round of interviews and finally received a rejection letter from HR.
  2. DiscardPolicy rejects the task but does not say anything. This is called "silent rejection", for example, most companies reject resumes silently.
  3. DiscardOldestPolicy, as the name suggests, discards old tasks and executes new tasks.
  4. CallerRunsPolicy has the calling thread, i.e. the one that submitted the task, run the task itself. That is VIP treatment.

Therefore, the default strategy used by these three thread pools is the first one, which is a blatant rejection.
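
If the default AbortPolicy is not what you want, you can pass a different handler when building the pool yourself. The sketch below (illustrative sizes, not from the article) switches to CallerRunsPolicy, so that once the pool and queue are full the submitting thread runs the overflow tasks itself:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CallerRunsDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(2),
                new ThreadPoolExecutor.CallerRunsPolicy());  // overflow runs on the caller (here, main)

        for (int i = 0; i < 6; i++) {
            pool.execute(() ->
                    System.out.println("running on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}
```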

Well, that's all for this article. Of course, there are many more knowledge points about thread pools, such as execute() and submit() methods, the life cycle of thread pools, and so on.

But as the number of readers gradually decreased, Sister Qi realized that there seemed to be some misunderstanding, so this article is the last one in the multi-threaded series.

This article has been included in my Github: https://github.com/xiaoqi6666/NYCSDE, which collects all my articles and materials and will be continuously updated and maintained. I also hope everyone can help by clicking Star; your support and recognition is the greatest motivation for my writing!

This article is reprinted from the WeChat public account "Ma Nong Tian Xiao Qi". To reprint this article, please contact that account.
