ThreadPoolExecutor (@since 1.5, @author Doug Lea)

The ThreadPoolExecutor class is designed to provide an efficient, flexible, and controllable way to manage and reuse thread resources, so that concurrent tasks can be handled more effectively.

The following is the documentation comment from the source code:


An ExecutorService that executes each submitted task using one of possibly several pooled threads, typically configured using Executors factory methods.
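
For orientation, here is a minimal sketch of the Executors factory methods mentioned above (and discussed further below); the pool size of 4 and the printed message are arbitrary choices for the example, not recommendations.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FactoryMethodsSketch {
    public static void main(String[] args) {
        // Fixed-size pool: core and maximum size are both 4, backed by an unbounded queue.
        ExecutorService fixed = Executors.newFixedThreadPool(4);

        // Cached pool: creates threads on demand and reclaims ones that stay idle.
        ExecutorService cached = Executors.newCachedThreadPool();

        // Single background thread that processes tasks sequentially.
        ExecutorService single = Executors.newSingleThreadExecutor();

        fixed.execute(() -> System.out.println("ran on " + Thread.currentThread().getName()));

        fixed.shutdown();
        cached.shutdown();
        single.shutdown();
    }
}
```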

Thread pools address two distinct issues: they generally offer enhanced performance when processing a large volume of asynchronous tasks, due to reduced per-task overhead, and they offer a mechanism for limiting and managing the resources, including threads, utilized when executing a batch of tasks. Each ThreadPoolExecutor also maintains basic statistics, such as the count of completed tasks.

To cater to a broad range of applications, this class provides numerous configurable parameters and extension points. However, developers are encouraged to utilize the more convenient Executors factory methods: Executors.newCachedThreadPool (an unbounded thread pool with automatic thread reclamation), Executors.newFixedThreadPool (a fixed-size thread pool), and Executors.newSingleThreadExecutor (a single background thread), which pre-configure settings for the most prevalent use cases. Otherwise, refer to the following guidelines when manually configuring and tuning this class:

  • Core and Maximum Pool Sizes: A ThreadPoolExecutor automatically adjusts the pool size (see getPoolSize) according to the thresholds set by corePoolSize (see getCorePoolSize) and maximumPoolSize (see getMaximumPoolSize). When a new task is submitted via the execute(Runnable) method, and fewer than corePoolSize threads are active, a new thread is created to handle the request, even if other worker threads are idle. If there are more than corePoolSize but fewer than maximumPoolSize threads running, a new thread will only be created if the queue is full. By setting corePoolSize and maximumPoolSize to the same value, you establish a fixed-size thread pool. By setting maximumPoolSize to an effectively unbounded value such as Integer.MAX_VALUE, you permit the pool to accommodate an arbitrary number of concurrent tasks. Typically, core and maximum pool sizes are set only upon construction, but they can also be altered dynamically using setCorePoolSize and setMaximumPoolSize. (A configuration sketch combining these parameters with a keep-alive time, a bounded queue, and a rejection policy appears after this list.)

  • On-demand Construction: By default, even core threads are initially created and started only when new tasks arrive, but this behavior can be overridden dynamically using the methods prestartCoreThread or prestartAllCoreThreads. You may wish to prestart threads if you initialize the pool with a non-empty queue.

  • Creating New Threads: New threads are created using a ThreadFactory. If not otherwise specified, Executors.defaultThreadFactory is used, which generates threads all belonging to the same ThreadGroup with the same NORM_PRIORITY priority and non-daemon status. By providing a custom ThreadFactory, you can modify the thread's name, thread group, priority, daemon status, etc. If a ThreadFactory fails to create a thread, returning null from newThread, the executor will continue but may be unable to execute any tasks. Threads should possess the "modifyThread" RuntimePermission. If worker threads or other threads using the pool lack this permission, service may be compromised: configuration changes may not take effect promptly, and a shutting down pool may remain in a state where termination is possible but not finalized. (See the custom ThreadFactory sketch after this list.)

  • Keep-alive Times: If the pool currently has more than corePoolSize threads, excess threads will be terminated if they have been idle for more than the keepAliveTime (see getKeepAliveTime(TimeUnit)). This provides a means of reducing resource consumption when the pool is not actively utilized. If the pool becomes busier later, new threads will be created. This parameter can also be changed dynamically using the method setKeepAliveTime(long, TimeUnit). Using a value of Long.MAX_VALUE TimeUnit.NANOSECONDS effectively prevents idle threads from ever terminating before shut down. By default, the keep-alive policy applies only when there are more than corePoolSize threads. However, the method allowCoreThreadTimeOut(boolean) can be used to apply this time-out policy to core threads as well, provided that the keepAliveTime value is non-zero.

  • Queuing: Any BlockingQueue can be employed to transfer and hold submitted tasks. The use of this queue interacts with pool sizing:

    • If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing.
    • If corePoolSize or more threads are running, the Executor always prefers queuing a request rather than adding a new thread.
    • If a request cannot be queued, a new thread is created unless this would exceed maximumPoolSize, in which case, the task will be rejected.

    There are three general queuing strategies:

    • Direct Handoffs: A good default choice for a work queue is a SynchronousQueue that hands off tasks to threads without otherwise holding them. Here, an attempt to queue a task will fail if no threads are immediately available to run it, so a new thread will be constructed. This policy avoids lockups when handling sets of requests that might have internal dependencies. Direct handoffs generally require unbounded maximumPoolSizes to avoid rejection of new submitted tasks. This in turn admits the possibility of unbounded thread growth when commands continue to arrive on average faster than they can be processed.
    • Unbounded Queues: Using an unbounded queue (for example, a LinkedBlockingQueue without a predefined capacity) will cause new tasks to wait in the queue when all corePoolSize threads are busy. Thus, no more than corePoolSize threads will ever be created. (And the value of the maximumPoolSize therefore doesn't have any effect.) This may be appropriate when each task is completely independent of others, so tasks cannot affect each other's execution; for example, in a web page server. While this style of queuing can be useful in smoothing out transient bursts of requests, it admits the possibility of unbounded work queue growth when commands continue to arrive on average faster than they can be processed.
    • Bounded Queues: A bounded queue (for example, an ArrayBlockingQueue) helps prevent resource exhaustion when used with finite maximumPoolSizes, but can be more difficult to tune and control. Queue sizes and maximum pool sizes may be traded off for each other: Using large queues and small pools minimizes CPU usage, OS resources, and context-switching overhead, but can lead to artificially low throughput. If tasks frequently block (for example, if they are I/O bound), a system may be able to schedule time for more threads than you otherwise allow. Use of small queues generally requires larger pool sizes, which keeps CPUs busier but may encounter unacceptable scheduling overhead, which also decreases throughput.

  • Rejected Tasks: New tasks submitted via the execute(Runnable) method will be rejected when the Executor has been shut down, and also when the Executor uses finite bounds for both maximum threads and work queue capacity, and is saturated. In either case, the execute method invokes the RejectedExecutionHandler.rejectedExecution(Runnable, ThreadPoolExecutor) method of its RejectedExecutionHandler. Four predefined handler policies are provided:

    • In the default ThreadPoolExecutor.AbortPolicy, the handler throws a runtime RejectedExecutionException upon rejection.
    • In ThreadPoolExecutor.CallerRunsPolicy, the thread that invokes execute itself runs the task. This provides a simple feedback control mechanism that will slow down the rate that new tasks are submitted.
    • In ThreadPoolExecutor.DiscardPolicy, a task that cannot be executed is simply dropped.
    • In ThreadPoolExecutor.DiscardOldestPolicy, if the executor is not shut down, the task at the head of the work queue is dropped, and then execution is retried (which can fail again, causing this to be repeated).

    It is possible to define and use other kinds of RejectedExecutionHandler classes. Doing so requires some care, especially when policies are designed to work only under particular capacity or queuing policies. (A sketch of a simple custom handler appears after this list.)

  • Hook Methods: This class provides protected overridable beforeExecute(Thread, Runnable) and afterExecute(Runnable, Throwable) methods that are called before and after the execution of each task. These can be used to manipulate the execution environment; for example, reinitializing ThreadLocals, gathering statistics, or adding log entries. Additionally, the method terminated can be overridden to perform any special processing that needs to be done once the Executor has fully terminated. (A subclass sketch illustrating these hooks appears after this list.)

    If hook or callback methods throw exceptions, internal worker threads may in turn fail and abruptly terminate.

  • Queue Maintenance: The method getQueue() allows access to the work queue for purposes of monitoring and debugging. Use of this method for any other purpose is strongly discouraged. Two supplied methods, remove(Runnable) and purge, are available to assist in storage reclamation when large numbers of queued tasks become cancelled.

  • Finalization: A pool that is no longer referenced in a program AND has no remaining threads will be shut down automatically. If you would like to ensure that unreferenced pools are reclaimed even if users forget to call shutdown, then you must arrange that unused threads eventually die, by setting appropriate keep-alive times, using a lower bound of zero core threads and/or setting allowCoreThreadTimeOut(boolean).
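
The following minimal sketch ties together the core/maximum pool size, keep-alive, bounded-queue, and rejection-policy parameters discussed in the list above. The specific values (2 core threads, a maximum of 4, a 60-second keep-alive, a queue capacity of 100) and the use of CallerRunsPolicy are arbitrary choices for illustration, not tuning advice.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ManualConfigSketch {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                          // corePoolSize
                4,                                          // maximumPoolSize
                60L, TimeUnit.SECONDS,                      // keepAliveTime for threads above the core size
                new ArrayBlockingQueue<Runnable>(100),      // bounded work queue
                new ThreadPoolExecutor.CallerRunsPolicy()); // applied when the pool is saturated or shut down

        pool.allowCoreThreadTimeOut(true); // let idle core threads time out too (keepAliveTime must be non-zero)
        pool.prestartAllCoreThreads();     // start core threads up front instead of on demand

        for (int i = 0; i < 10; i++) {
            final int id = i;
            pool.execute(() -> System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}
```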
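
As a sketch of the custom ThreadFactory mentioned under "Creating New Threads", the factory below only renames the threads it creates and pins down daemon status and priority; the "worker-" prefix is an arbitrary choice for the example.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedThreadFactory implements ThreadFactory {
    private final AtomicInteger counter = new AtomicInteger();

    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, "worker-" + counter.incrementAndGet());
        t.setDaemon(false);                  // non-daemon, matching the default factory
        t.setPriority(Thread.NORM_PRIORITY); // normal priority, matching the default factory
        return t;
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2, new NamedThreadFactory());
        pool.execute(() -> System.out.println("running on " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}
```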
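
As noted under "Rejected Tasks", handlers other than the four predefined policies can be defined. The handler below merely logs rejections to standard error; it is an illustrative assumption of what such a handler might do, and the tiny single-thread, direct-handoff pool exists only to make rejection easy to trigger.

```java
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class LoggingRejectionHandler implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        // Invoked by execute() when the pool is saturated or has been shut down.
        System.err.println("Rejected " + r + "; queued tasks = " + executor.getQueue().size());
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>(), // direct handoff: no buffering, so saturation is immediate
                new LoggingRejectionHandler());

        for (int i = 0; i < 5; i++) {
            pool.execute(() -> {
                try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
        }
        pool.shutdown();
    }
}
```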
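
A minimal subclass illustrating the beforeExecute, afterExecute, and terminated hooks described above; the logging here is only one example of "gathering statistics or adding log entries", and the pool parameters are arbitrary.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class LoggingThreadPool extends ThreadPoolExecutor {
    public LoggingThreadPool(int coreSize, int maxSize) {
        super(coreSize, maxSize, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        super.beforeExecute(t, r);
        System.out.println(t.getName() + " is about to run " + r);
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        if (t != null) {
            System.err.println("task " + r + " failed: " + t); // uncaught exception from the task
        }
    }

    @Override
    protected void terminated() {
        super.terminated();
        System.out.println("pool has fully terminated");
    }

    public static void main(String[] args) {
        LoggingThreadPool pool = new LoggingThreadPool(2, 2);
        pool.execute(() -> System.out.println("doing some work"));
        pool.shutdown();
    }
}
```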


