Shape-IoU: A Metric Considering Bounding Box Shape and Scale

Abstract

https://arxiv.org/pdf/2312.17663.pdf
As an important component of the detector's localization branch, the bounding box regression loss plays a significant role in object detection tasks. Existing bounding box regression methods usually consider the geometric relationship between the GT box and the predicted box and compute the loss from the relative position and shape of the two boxes, while ignoring the influence of inherent properties of the bounding box itself, such as its shape and scale, on bounding box regression. To make up for the shortcomings of existing research, this article proposes a bounding box regression method that focuses on the shape and scale of the bounding box itself. First, we analyze the regression characteristics of bounding boxes and find that the shape and scale of the bounding box itself affect the regression results. Based on this conclusion, we propose the Shape-IoU method, which computes the loss by focusing on the shape and scale of the bounding box itself, making bounding box regression more accurate. Finally, we validate our method through a large number of comparative experiments; the results show that it effectively improves detection performance, outperforms existing methods, and achieves state-of-the-art performance on different detection tasks. Code is available at https://github.com/malagoutou/Shape-IoU.

Index Terms: object detection, loss function, bounding box regression

I. Introduction

Object detection is one of the fundamental tasks in computer vision, aiming to localize and recognize objects in images. Depending on whether anchors are generated, object detection methods can be divided into anchor-based and anchor-free approaches. Anchor-based algorithms include Faster R-CNN [1], the YOLO (You Only Look Once) series [2], SSD (Single Shot MultiBox Detector) [3], and RetinaNet [4]. Anchor-free detection algorithms include CornerNet [5], CenterNet [6], and FCOS (Fully Convolutional One-Stage Object Detection) [7]. In these detectors, the bounding box regression loss function, as an important component of the localization branch, plays an irreplaceable role.

The most commonly used methods in object detection include IoU [8], GIoU [9], CIoU [10], and SIoU [11]. IoU [8], the most widely used loss function in object detection, has the advantage of accurately describing the degree of matching between the predicted box and the GT box. Its main shortcoming is that when the overlap between the two boxes is 0, it cannot describe their positional relationship. GIoU [9] addresses this shortcoming by introducing the minimum enclosing box. Building on minimizing the normalized distance between the predicted box and the GT box, CIoU [10] further improves detection accuracy by adding a shape loss term. SIoU [11] proposes taking the angle of the line connecting the center points of the predicted box and the GT box as an additional loss term, so that the degree of matching between the two boxes can be judged more accurately through changes in this angle.

In summary, previous bounding box regression methods mainly achieve more accurate regression by adding new geometric constraints on top of IoU [8]. These methods consider the influence of the distance, shape, and angle between the GT box and the anchor box on bounding box regression, but ignore the fact that the shape and scale of the bounding box itself also affect bounding box regression. To further improve regression accuracy, we analyze the influence of the shape and scale of the bounding box itself and propose a new bounding box regression loss: Shape-IoU.

The main contributions of this article are as follows:

  • We analyzed the characteristics of bounding box regression and concluded that, during bounding box regression, the shape and scale factors of the regression samples themselves affect the regression results.
  • Building on existing bounding box regression loss functions and taking into account the influence of the shape and scale of the regression samples themselves, we propose the Shape-IoU loss function; for tiny object detection tasks, we also propose the Shape-Dot Distance and Shape-NWD losses.
  • We conducted a series of comparative experiments on different detection tasks using state-of-the-art one-stage detectors, and the results demonstrate that the detection performance of our method outperforms existing methods and reaches the state of the art (SOTA).


II. Related Work
A. IoU-based Object Detection Metrics

In recent years, with the development of detectors, bounding box regression losses have also developed rapidly. Initially, IoU was proposed to evaluate the state of bounding box regression; subsequent methods such as GIoU [9], DIoU [10], CIoU [10], EIoU [12], and SIoU [11] kept improving on IoU by adding different constraints to achieve better detection performance.

  1. IoU Metric: IoU (Intersection over Union) [8] is the most popular evaluation criterion in object detection. It is defined as follows:
    IoU = \frac{|B \cap B^{gt}|}{|B \cup B^{gt}|} \tag{1}
    where B and B^{gt} denote the predicted box and the GT box, respectively.
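
For reference, here is a minimal Python sketch of Eq. (1); it assumes boxes are given as (x1, y1, x2, y2) corner coordinates, which is our convention for illustration rather than anything prescribed by the paper:

```python
def iou(box, gt, eps=1e-7):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle
    ix1, iy1 = max(box[0], gt[0]), max(box[1], gt[1])
    ix2, iy2 = min(box[2], gt[2]), min(box[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = area(B) + area(B^gt) - intersection
    area_b = (box[2] - box[0]) * (box[3] - box[1])
    area_gt = (gt[2] - gt[0]) * (gt[3] - gt[1])
    return inter / (area_b + area_gt - inter + eps)
```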

  2. GIoU Metric: To address the gradient vanishing problem of the IoU loss when the GT box and the anchor box do not overlap, GIoU (Generalized Intersection over Union) [9] was proposed. It is defined as follows:
    GIoU = IoU - \frac{|C - B \cup B^{gt}|}{|C|} \tag{2}
    where C denotes the minimum enclosing box covering both the GT box and the anchor box.
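
A corresponding sketch under the same assumed corner-coordinate convention:

```python
def giou(box, gt, eps=1e-7):
    """GIoU: IoU minus the fraction of the enclosing box C not covered by the union."""
    ix1, iy1 = max(box[0], gt[0]), max(box[1], gt[1])
    ix2, iy2 = min(box[2], gt[2]), min(box[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_b = (box[2] - box[0]) * (box[3] - box[1])
    area_gt = (gt[2] - gt[0]) * (gt[3] - gt[1])
    union = area_b + area_gt - inter
    # Smallest enclosing box C
    cx1, cy1 = min(box[0], gt[0]), min(box[1], gt[1])
    cx2, cy2 = max(box[2], gt[2]), max(box[3], gt[3])
    area_c = (cx2 - cx1) * (cy2 - cy1)
    return inter / (union + eps) - (area_c - union) / (area_c + eps)
```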

  3. DIoU Metric: Compared with GIoU, DIoU [10] considers the distance constraint between bounding boxes. It adds a normalized center-point distance loss term to IoU, making the regression results more accurate. It is defined as follows:
    DIoU = IoU - \frac{\rho^{2}(b, b^{gt})}{c^{2}}

where b and b^{gt} are the center points of the anchor box and the GT box respectively, \rho(\cdot) denotes the Euclidean distance, and c is the diagonal length of the minimum enclosing box covering the two boxes.
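
A DIoU sketch under the same assumptions, reusing the `iou` helper defined above:

```python
def diou(box, gt, eps=1e-7):
    """DIoU: IoU minus the squared, normalized distance between the box centers."""
    iou_val = iou(box, gt)  # `iou` from the sketch above
    # Center points b and b^gt
    bx, by = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    gx, gy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    rho2 = (bx - gx) ** 2 + (by - gy) ** 2  # rho^2(b, b^gt)
    # c^2: squared diagonal of the smallest enclosing box
    cw = max(box[2], gt[2]) - min(box[0], gt[0])
    ch = max(box[3], gt[3]) - min(box[1], gt[1])
    return iou_val - rho2 / (cw ** 2 + ch ** 2 + eps)
```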

CIoU [10] further considers the shape similarity between the GT box and the anchor box by adding a shape loss term to DIoU that reduces the difference in aspect ratio between them. It is defined as follows:

\begin{array}{c}
CIoU = IoU - \frac{\rho^{2}(b, b^{gt})}{c^{2}} - \alpha v \\
\alpha = \frac{v}{(1 - IoU) + v} \\
v = \frac{4}{\pi^{2}} \left( \arctan \frac{w^{gt}}{h^{gt}} - \arctan \frac{w}{h} \right)^{2}
\end{array}

where w^{gt} and h^{gt} denote the width and height of the GT box, and w and h denote the width and height of the anchor box.
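
A CIoU sketch under the same assumptions, building on the `iou` and `diou` helpers above:

```python
import math

def ciou(box, gt, eps=1e-7):
    """CIoU: the DIoU terms plus an aspect-ratio consistency penalty alpha * v."""
    iou_val = iou(box, gt)    # plain IoU
    diou_val = diou(box, gt)  # IoU - rho^2(b, b^gt) / c^2
    w, h = box[2] - box[0], box[3] - box[1]
    wg, hg = gt[2] - gt[0], gt[3] - gt[1]
    # v measures the aspect-ratio difference, alpha is its trade-off weight
    v = (4 / math.pi ** 2) * (math.atan(wg / (hg + eps)) - math.atan(w / (h + eps))) ** 2
    alpha = v / ((1 - iou_val) + v + eps)
    return diou_val - alpha * v
```
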
  4. EIoU Metric: EIoU [12] redefines the shape loss based on CIoU and further improves detection accuracy by directly minimizing the difference in width and height between the GT box and the anchor box. It is defined as follows:

EIoU = IoU - \frac{\rho^{2}(b, b^{gt})}{c^{2}} - \frac{\rho^{2}(w, w^{gt})}{(w^{c})^{2}} - \frac{\rho^{2}(h, h^{gt})}{(h^{c})^{2}}

where w^{c} and h^{c} are the width and height of the minimum enclosing box covering the GT box and the anchor box.
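
An EIoU sketch under the same assumptions, again reusing `diou` from above:

```python
def eiou(box, gt, eps=1e-7):
    """EIoU: DIoU-style center term plus direct width and height difference penalties."""
    diou_val = diou(box, gt)  # IoU - rho^2(b, b^gt) / c^2
    w, h = box[2] - box[0], box[3] - box[1]
    wg, hg = gt[2] - gt[0], gt[3] - gt[1]
    # w^c, h^c: width and height of the smallest enclosing box
    wc = max(box[2], gt[2]) - min(box[0], gt[0])
    hc = max(box[3], gt[3]) - min(box[1], gt[1])
    return diou_val - (w - wg) ** 2 / (wc ** 2 + eps) - (h - hg) ** 2 / (hc ** 2 + eps)
```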

  5. SIoU Metric: Building on previous research, SIoU [11] further considers the influence of the angle between bounding boxes on bounding box regression. It aims to accelerate convergence by driving the line connecting the centers of the anchor box and the GT box toward the horizontal or vertical direction. It is defined as follows:

\begin{array}{l}
SIoU = IoU - \frac{\Delta + \Omega}{2} \\
\Lambda = \sin \left( 2 \sin^{-1} \frac{\min\left(\left|x_{c}^{gt} - x_{c}\right|, \left|y_{c}^{gt} - y_{c}\right|\right)}{\sqrt{\left(x_{c}^{gt} - x_{c}\right)^{2} + \left(y_{c}^{gt} - y_{c}\right)^{2}} + \epsilon} \right) \\
\Delta = \sum_{t=x, y} \left(1 - e^{-\gamma \rho_{t}}\right), \quad \gamma = 2 - \Lambda \\
\left\{\begin{array}{l}
\rho_{x} = \left( \frac{x_{c} - x_{c}^{gt}}{w^{c}} \right)^{2} \\
\rho_{y} = \left( \frac{y_{c} - y_{c}^{gt}}{h^{c}} \right)^{2}
\end{array}\right. \\
\Omega = \sum_{t=w, h} \left(1 - e^{-\omega_{t}}\right)^{\theta}, \quad \theta = 4 \\
\left\{\begin{array}{l}
\omega_{w} = \frac{\left|w - w^{gt}\right|}{\max\left(w, w^{gt}\right)} \\
\omega_{h} = \frac{\left|h - h^{gt}\right|}{\max\left(h, h^{gt}\right)}
\end{array}\right.
\end{array}
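
The following plain-Python sketch follows the formulas as reconstructed above; it is only an illustration under the earlier corner-coordinate assumption, not the official SIoU implementation:

```python
import math

def siou(box, gt, theta=4, eps=1e-7):
    """SIoU sketch: IoU - (Delta + Omega) / 2 with angle, distance, and shape costs."""
    iou_val = iou(box, gt)  # `iou` from the sketch above
    bx, by = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    gx, gy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    w, h = box[2] - box[0], box[3] - box[1]
    wg, hg = gt[2] - gt[0], gt[3] - gt[1]
    # Enclosing-box width and height (w^c, h^c)
    wc = max(box[2], gt[2]) - min(box[0], gt[0])
    hc = max(box[3], gt[3]) - min(box[1], gt[1])
    # Angle cost Lambda and distance cost Delta
    dist = math.sqrt((gx - bx) ** 2 + (gy - by) ** 2)
    lam = math.sin(2 * math.asin(min(abs(gx - bx), abs(gy - by)) / (dist + eps)))
    gamma = 2 - lam
    rho_x = ((bx - gx) / (wc + eps)) ** 2
    rho_y = ((by - gy) / (hc + eps)) ** 2
    delta = (1 - math.exp(-gamma * rho_x)) + (1 - math.exp(-gamma * rho_y))
    # Shape cost Omega
    omega_w = abs(w - wg) / max(w, wg)
    omega_h = abs(h - hg) / max(h, hg)
    omega = (1 - math.exp(-omega_w)) ** theta + (1 - math.exp(-omega_h)) ** theta
    return iou_val - (delta + omega) / 2
```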

B. Metric in Tiny Object Detection

IoU-based metrics are suitable for general object detection tasks. For small object detection, Dot Distance [13] and Normalized Wasserstein Distance (NWD) [14] have been proposed to overcome the high sensitivity of the IoU value to location deviations of tiny objects.

  1. Dot Distance:

\begin{array}{c}
D = \sqrt{\left(x_{c} - x_{c}^{gt}\right)^{2} + \left(y_{c} - y_{c}^{gt}\right)^{2}} \\
S = \sqrt{\frac{\sum_{i=1}^{M} \sum_{j=1}^{N_{i}} w_{ij} \cdot h_{ij}}{\sum_{i=1}^{M} N_{i}}} \\
DotD = e^{-\frac{D}{S}}
\end{array}

where D denotes the Euclidean distance between the center points of the GT box and the anchor box, S is the average size of the targets in the dataset, M is the number of images, N_{i} is the number of labeled bounding boxes in the i-th image, and w_{ij} and h_{ij} are the width and height of the j-th bounding box in the i-th image, respectively.
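
A sketch of DotD under the same box convention; `avg_size_s` stands for the dataset statistic S and would be computed once over the whole training set:

```python
import math

def dot_distance(box, gt, avg_size_s):
    """DotD = exp(-D / S), with D the center distance and S the average object size."""
    bx, by = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    gx, gy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    d = math.sqrt((bx - gx) ** 2 + (by - gy) ** 2)
    return math.exp(-d / avg_size_s)

def average_size(all_boxes_per_image):
    """S: square root of the mean box area over all labeled boxes in the dataset."""
    areas, count = 0.0, 0
    for boxes in all_boxes_per_image:  # one list of (w, h) pairs per image
        for w, h in boxes:
            areas += w * h
            count += 1
    return math.sqrt(areas / count)
```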

  2. Normalized Gaussian Wasserstein Distance:

\begin{array}{c}
D = \sqrt{\left(x_{c} - x_{c}^{gt}\right)^{2} + \left(y_{c} - y_{c}^{gt}\right)^{2} + \frac{\left(w - w^{gt}\right)^{2} + \left(h - h^{gt}\right)^{2}}{weight^{2}}} \\
NWD = e^{-\frac{D}{C}}
\end{array}

where weight = 2 and C is the constant associated with the dataset.
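
A sketch under the same assumptions; `c_const` stands for the dataset-dependent constant C and has to be supplied by the user:

```python
import math

def nwd(box, gt, c_const, weight=2.0):
    """NWD = exp(-D / C), with D defined over box centers and widths/heights."""
    bx, by = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    gx, gy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    w, h = box[2] - box[0], box[3] - box[1]
    wg, hg = gt[2] - gt[0], gt[3] - gt[1]
    d = math.sqrt((bx - gx) ** 2 + (by - gy) ** 2
                  + ((w - wg) ** 2 + (h - hg) ** 2) / weight ** 2)
    return math.exp(-d / c_const)
```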

III. Methods
A. Analysis of Bounding Box Regression Characteristics

As shown in Fig. 2, the GT boxes in regression samples A and B have the same scale, as do those in C and D. The GT boxes in A and D have the same shape, as do those in B and C. The bounding boxes in C and D are larger in scale than those in A and B. All regression samples in Fig. 2a have the same deviation and a shape deviation of 0; the difference in Fig. 2b is that all regression samples have the same shape deviation and a deviation of 0.

- The deviation of A and B in Fig. 2a is the same, but their IoU values differ.
- The deviation of C and D in Fig. 2a is the same, but their IoU values differ; compared with A and B in Fig. 2a, the difference in IoU values is not significant.
- The shape deviation of A and B in Fig. 2b is the same, but their IoU values differ.
- The shape deviation of C and D in Fig. 2b is the same, but their IoU values differ; compared with A and B in Fig. 2b, the difference in IoU values is not significant.

The reason for the difference in IoU value between A and B in Fig. 2a is that their GT boxes have different shapes, and the deviation direction corresponds to their long-edge and short-edge directions, respectively. For A, a deviation along the long-edge direction of its GT box has a smaller impact on the IoU value, while for B, a deviation along the short-edge direction has a greater impact on the IoU value. Compared with large-scale bounding boxes, smaller-scale bounding boxes are more sensitive to changes in IoU value, and the shape of the GT box has a more significant impact on the IoU value of smaller-scale bounding boxes. Because A and B are smaller in scale than C and D, the change in IoU value is more significant when the shape and deviation are the same. Similarly, analyzing the regression samples in Fig. 2b from the perspective of shape deviation shows that the shape of the GT box in a regression sample affects its IoU value during regression.

Based on the above analysis, the following conclusions can be drawn:
- Assuming the GT box is not square and has distinct long and short edges, differences in the shape and scale of the bounding boxes in regression samples lead to differences in their IoU values when the deviation and shape deviation are the same and not all 0.
- For bounding box regression samples of the same scale, when the deviation and shape deviation of the regression samples are the same and not all 0, the shape of the bounding box affects the IoU value of the regression sample; the change in IoU value caused by deviation and shape deviation along the short-edge direction of the bounding box is more significant.
- For regression samples whose bounding boxes have the same shape, when the deviation and shape deviation are the same and not all 0, the IoU value of smaller-scale regression samples is more significantly affected by the shape of the GT box than that of larger-scale samples.

B. Shape-IoU
The formula for Shape-IoU, illustrated in Fig. 3, is as follows:

\begin{array}{c}
IoU = \frac{|B \cap B^{gt}|}{|B \cup B^{gt}|} \\
ww = \frac{2 \times (w^{gt})^{scale}}{(w^{gt})^{scale} + (h^{gt})^{scale}} \\
hh = \frac{2 \times (h^{gt})^{scale}}{(w^{gt})^{scale} + (h^{gt})^{scale}} \\
distance^{shape} = hh \times \frac{\left(x_{c} - x_{c}^{gt}\right)^{2}}{c^{2}} + ww \times \frac{\left(y_{c} - y_{c}^{gt}\right)^{2}}{c^{2}} \\
\Omega^{shape} = \sum_{t=w, h} \left(1 - e^{-\omega_{t}}\right)^{\theta}, \quad \theta = 4 \\
\left\{\begin{array}{l}
\omega_{w} = hh \times \frac{\left|w - w^{gt}\right|}{\max\left(w, w^{gt}\right)} \\
\omega_{h} = ww \times \frac{\left|h - h^{gt}\right|}{\max\left(h, h^{gt}\right)}
\end{array}\right.
\end{array}

where scale is a scale factor related to the scale of the targets in the dataset, and ww and hh are the weight coefficients in the horizontal and vertical directions respectively, whose values are related to the shape of the GT box. The corresponding bounding box regression loss is as follows:

L_{Shape-IoU} = 1 - IoU + distance^{shape} + 0.5 \times \Omega^{shape}
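
For illustration, here is a plain-Python sketch of the loss exactly as written above. The official implementation in the linked repository operates on batched tensors and may differ in details; in particular, c^2 is assumed here to be the squared diagonal of the smallest enclosing box, as in DIoU:

```python
import math

def shape_iou_loss(box, gt, scale=1.0, theta=4, eps=1e-7):
    """Shape-IoU loss sketch: 1 - IoU + distance^shape + 0.5 * Omega^shape.
    `scale` is the dataset-dependent scale factor from the paper."""
    iou_val = iou(box, gt)  # plain IoU from the earlier sketch
    bx, by = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    gx, gy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    w, h = box[2] - box[0], box[3] - box[1]
    wg, hg = gt[2] - gt[0], gt[3] - gt[1]
    # Shape-dependent weights ww, hh derived from the GT box
    denom = wg ** scale + hg ** scale + eps
    ww = 2 * wg ** scale / denom
    hh = 2 * hg ** scale / denom
    # c^2: squared diagonal of the smallest enclosing box (assumed, as in DIoU)
    cw = max(box[2], gt[2]) - min(box[0], gt[0])
    ch = max(box[3], gt[3]) - min(box[1], gt[1])
    c2 = cw ** 2 + ch ** 2 + eps
    dist_shape = hh * (bx - gx) ** 2 / c2 + ww * (by - gy) ** 2 / c2
    # Shape cost Omega^shape
    omega_w = hh * abs(w - wg) / max(w, wg)
    omega_h = ww * abs(h - hg) / max(h, hg)
    omega_shape = (1 - math.exp(-omega_w)) ** theta + (1 - math.exp(-omega_h)) ** theta
    return 1 - iou_val + dist_shape + 0.5 * omega_shape
```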

C. Shape-IoU in Small Target

  1. Shape-Dot Distance: We integrate the idea of Shape-IoU into Dot Distance to obtain Shape-Dot Distance, which is defined as follows:

\begin{array}{c}
D = \sqrt{hh \times \left(x_{c} - x_{c}^{gt}\right)^{2} + ww \times \left(y_{c} - y_{c}^{gt}\right)^{2}} \\
S = \sqrt{\frac{\sum_{i=1}^{M} \sum_{j=1}^{N_{i}} w_{ij} \times h_{ij}}{\sum_{i=1}^{M} N_{i}}} \\
DotD^{shape} = e^{-\frac{D}{S}}
\end{array}
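
A sketch following the formulas above; as before, `avg_size_s` stands for the dataset statistic S, and `scale` is the Shape-IoU scale factor:

```python
import math

def shape_dot_distance(box, gt, avg_size_s, scale=1.0, eps=1e-7):
    """Shape-Dot Distance sketch: center distance reweighted by the GT-shape terms hh and ww."""
    bx, by = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    gx, gy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    wg, hg = gt[2] - gt[0], gt[3] - gt[1]
    denom = wg ** scale + hg ** scale + eps
    ww = 2 * wg ** scale / denom
    hh = 2 * hg ** scale / denom
    d = math.sqrt(hh * (bx - gx) ** 2 + ww * (by - gy) ** 2)
    return math.exp(-d / avg_size_s)
```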

  2. Shape-NWD: Similarly, we integrate the idea of Shape-IoU into NWD to obtain Shape-NWD, which is defined as follows:

\begin{array}{c}
B = \frac{\left(w - w^{gt}\right)^{2} + \left(h - h^{gt}\right)^{2}}{weight^{2}}, \quad weight = 2 \\
D = \sqrt{hh \times \left(x_{c} - x_{c}^{gt}\right)^{2} + ww \times \left(y_{c} - y_{c}^{gt}\right)^{2} + B} \\
NWD^{shape} = e^{-\frac{D}{C}}
\end{array}
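
A sketch under the same assumptions, with `c_const` again standing for the dataset-dependent constant C:

```python
import math

def shape_nwd(box, gt, c_const, scale=1.0, weight=2.0, eps=1e-7):
    """Shape-NWD sketch: NWD with the center-distance terms reweighted by hh and ww."""
    bx, by = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    gx, gy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    w, h = box[2] - box[0], box[3] - box[1]
    wg, hg = gt[2] - gt[0], gt[3] - gt[1]
    denom = wg ** scale + hg ** scale + eps
    ww = 2 * wg ** scale / denom
    hh = 2 * hg ** scale / denom
    b_term = ((w - wg) ** 2 + (h - hg) ** 2) / weight ** 2
    d = math.sqrt(hh * (bx - gx) ** 2 + ww * (by - gy) ** 2 + b_term)
    return math.exp(-d / c_const)
```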

IV. Experiments
A. PASCAL VOC on YOLOv8 and YOLOv7

The PASCAL VOC dataset is one of the most popular datasets in the field of object detection. In this article, we use the train and val splits of VOC2007 and VOC2012 as the training set (16551 images) and the VOC2007 test split as the test set (4952 images). In this experiment, we choose the state-of-the-art one-stage detectors YOLOv8s and YOLOv7-tiny for the comparison experiments on the VOC dataset, with SIoU as the comparison method. The experimental results are shown in Table I:

B. VisDrone 2019 on YOLOv8

VisDrone2019 is the most popular UAV aerial imagery dataset in the field of object detection and, compared with general datasets, contains a large number of small targets. In this experiment, YOLOv8s is chosen as the detector and SIoU as the comparison method. The experimental results are as follows:

C. AI-TOD on YOLOv5

AI-TOD is a remote sensing image dataset; unlike general datasets, it contains a significant number of tiny targets, with an average target size of only 12.8 pixels. In this experiment, YOLOv5s is chosen as the detector and SIoU as the comparison method. The experimental results are shown in Table III:

V. Conclusion

In this article, we summarize the advantages and disadvantages of existing bounding box regression methods, pointing out that existing research focuses on the geometric constraints between the GT box and the predicted box while ignoring the influence of geometric factors of the bounding box itself, such as its shape and scale, on the regression results. Then, by analyzing the regression characteristics of bounding boxes, we identify how these geometric factors of the bounding box itself affect regression. Based on the above analysis, we propose the Shape-IoU method, which computes the loss by focusing on the shape and scale of the bounding box itself, thereby improving regression accuracy. Finally, a series of comparative experiments were conducted with state-of-the-art one-stage detectors on datasets of different scales, and the results show that our method outperforms existing methods and achieves state-of-the-art performance.
