[Multi-Object Tracking] TrackFormer: a sentence-by-sentence translation (took three days)!!!

TrackFormer: Multi-Object Tracking with Transformers

Abstract

The challenging task of multi-object tracking (MOT) requires simultaneous reasoning about track initialization, identity, and spatio-temporal trajectories. We formulate this task as a frame-to-frame set prediction problem and introduce TrackFormer, an end-to-end trainable MOT approach based on an encoder-decoder Transformer architecture. Our model achieves data association between frames via attention by evolving a set of track predictions through a video sequence. The Transformer decoder initializes new tracks from static object queries and autoregressively follows existing tracks in space and time with the conceptually new and identity-preserving track queries. Both query types benefit from self- and encoder-decoder attention on global frame-level features, thereby omitting any additional graph optimization or modeling of motion and/or appearance. TrackFormer introduces a new tracking-by-attention paradigm and, while simple in its design, is able to achieve state-of-the-art performance on the task of multi-object tracking (MOT17 and MOT20) and segmentation (MOTS20).

1. Introduction

Humans need to focus their attention to track objects in space and time, for example, when playing a game of tennis, golf, or pong. This challenge is only increased when tracking not one, but multiple objects, in crowded and real-world scenarios. Following this analogy, we demonstrate the effectiveness of Transformer [51] attention for the task of multi-object tracking (MOT) in videos.

The goal in MOT is to follow the trajectories of a set of objects, e.g., pedestrians, while keeping their identities discriminated as they are moving throughout a video sequence. Due to the advances in image-level object detection [7, 39], most approaches follow the two-step tracking-by-detection paradigm: (i) detecting objects in individual video frames and (ii) associating sets of detections between frames, thereby creating individual object tracks over time. Traditional tracking-by-detection methods associate detections via temporally sparse [23, 26] or dense [19, 22] graph optimization, or apply convolutional neural networks to predict matching scores between detections [8, 24].
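To make the two-step paradigm concrete, here is a minimal sketch of step (ii): greedy IoU-based association of detections between two frames. It stands in for the graph optimization or learned matching scores of the cited methods; all function names and the threshold are illustrative, not from the paper.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def associate(prev_boxes, cur_boxes, iou_thresh=0.3):
    """Greedily match current detections to previous ones by best IoU."""
    pairs = sorted(
        ((iou(p, c), i, j) for i, p in enumerate(prev_boxes)
         for j, c in enumerate(cur_boxes)),
        reverse=True)
    matches, used_prev, used_cur = {}, set(), set()
    for score, i, j in pairs:
        if score < iou_thresh or i in used_prev or j in used_cur:
            continue
        matches[j] = i          # current detection j continues track i
        used_prev.add(i); used_cur.add(j)
    return matches
```

Detections left unmatched would start new tracks; unmatched previous tracks would be terminated or kept alive by a motion model in a full tracker.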


Recent works [4, 6, 29, 69] suggest a variation of the traditional paradigm, coined tracking-by-regression [12]. In this approach, the object detector not only provides frame-wise detections, but replaces the data association step with a continuous regression of each track to the changing position of its object. These approaches achieve track association implicitly, but provide top performance only by relying either on additional graph optimization [6, 29] or motion and appearance models [4]. This is largely due to the isolated and local bounding box regression, which lacks any notion of object identity or global communication between tracks.

We present a first straightforward instantiation of tracking-by-attention, TrackFormer, an end-to-end trainable Transformer [51] encoder-decoder architecture. It encodes frame-level features from a convolutional neural network (CNN) [18] and decodes queries into bounding boxes associated with identities. The data association is performed through the novel and simple concept of track queries. Each query represents an object and follows it in space and time over the course of a video sequence in an autoregressive fashion. New objects entering the scene are detected by static object queries as in [7, 71] and subsequently transform to future track queries. At each frame, the encoder-decoder computes attention between the input image features and the track as well as object queries, and outputs bounding boxes with assigned identities. Thereby, TrackFormer performs tracking-by-attention and achieves detection and data association jointly without relying on any additional track matching, graph optimization, or explicit modeling of motion and/or appearance. In contrast to tracking-by-detection/regression, our approach detects and associates tracks simultaneously in a single step via attention (and not regression). TrackFormer extends the recently proposed set prediction objective for object detection [7, 48, 71] to multi-object tracking.

Summary: TrackFormer performs detection and data association purely through attention, rather than through the usual matching machinery such as the Hungarian algorithm or Kalman-filter motion models.

We evaluate TrackFormer on the MOT17 [30] and MOT20 [13] benchmarks, where it achieves state-of-the-art performance for public and private detections. Furthermore, we demonstrate the extension with a mask prediction head and show state-of-the-art results on the Multi-Object Tracking and Segmentation (MOTS20) challenge [52]. We hope this simple yet powerful baseline will inspire researchers to explore the potential of the tracking-by-attention paradigm. In summary, we make the following contributions:

  • An end-to-end trainable multi-object tracking approach which achieves detection and data association in a new tracking-by-attention paradigm

  • The concept of autoregressive track queries which embed an object's spatial position and identity, thereby tracking it in space and time

  • New state-of-the-art results on three challenging multi-object tracking (MOT17 and MOT20) and segmentation (MOTS20) benchmarks

2. Related work

Tracking-by-regression:

Tracking-by-regression refrains from associating detections between frames but instead accomplishes tracking by regressing past object locations to their new positions in the current frame. Previous efforts [4, 15] use regression heads on region-pooled object features. In [69], objects are represented as center points which allow for an association by a distance-based greedy matching algorithm. To overcome their lacking notion of object identity and global track reasoning, additional re-identification and motion models [4], as well as traditional [29] and learned [6] graph methods, have been necessary to achieve top performance.
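The distance-based greedy matching of [69] can be sketched as follows (a simplification: the original matches object center points under a size-dependent distance threshold, whereas the fixed `max_dist` here is illustrative):

```python
def centers_match(prev, cur, max_dist=50.0):
    """prev, cur: lists of (x, y) object centers.
    Returns a dict mapping current index -> matched previous index."""
    # enumerate all pairs, closest first
    pairs = sorted(
        ((((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2) ** 0.5, i, j)
         for i, p in enumerate(prev) for j, c in enumerate(cur)))
    out, used_prev, used_cur = {}, set(), set()
    for dist, i, j in pairs:
        if dist > max_dist or i in used_prev or j in used_cur:
            continue
        out[j] = i                       # greedy one-to-one assignment
        used_prev.add(i); used_cur.add(j)
    return out
```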

Tracking-by-segmentation:

Tracking-by-segmentation not only predicts object masks but leverages the pixel-level information to mitigate issues with crowdedness and ambiguous backgrounds. Prior attempts used category-agnostic image segmentation [31], applied Mask R-CNN [17] with 3D convolutions [52], mask pooling layers [38], or represented objects as unordered point clouds [58] and cost volumes [57]. However, the scarcity of annotated MOT segmentation data makes modern approaches still rely on bounding boxes.

In contrast, TrackFormer casts the entire tracking objective into a single set prediction problem, applying attention not only for the association step. It jointly reasons about track initialization, identity, and spatio-temporal trajectories. We only rely on feature-level attention and avoid additional graph optimization and appearance/motion models.

3. TrackFormer

We present TrackFormer, an end-to-end trainable multi-object tracking (MOT) approach based on an encoder-decoder Transformer [51] architecture. This section describes how we cast MOT as a set prediction problem and introduce the new tracking-by-attention paradigm. Furthermore, we explain the concept of track queries and their application for frame-to-frame data association.

3.1 MOT as a set prediction problem

Given a video sequence with K individual object identities, MOT describes the task of generating ordered tracks T_k = (b_{t1}^k, b_{t2}^k, ...) with bounding boxes b_t and track identities k. The subset (t1, t2, ...) of the total frames T indicates the time span between an object entering and leaving the scene. These include all frames for which an object is occluded by either the background or other objects. In order to cast MOT as a set prediction problem, we leverage an encoder-decoder Transformer architecture. Our model performs online tracking and yields per-frame object bounding boxes and class predictions associated with identities in four consecutive steps:

  • Frame-level feature extraction with a common CNN backbone, e.g., ResNet-50 [18]
  • Encoding of frame features with self-attention in a Transformer encoder
  • Decoding of queries with self- and encoder-decoder attention in a Transformer decoder
  • Mapping of queries to box and class predictions with multilayer perceptrons (MLPs)
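The four steps above can be sketched at the level of tensor shapes. Every function below is a zero-filled stand-in for the real CNN backbone, Transformer encoder/decoder, and prediction heads; all names and dimensions are illustrative, not the authors' API:

```python
import numpy as np

def backbone(frame):
    """Step 1: frame-level CNN features, e.g. from a ResNet-50."""
    h, w, d = 16, 16, 256
    return np.zeros((h * w, d))          # flattened spatial features

def encoder(features):
    """Step 2: self-attention over frame features (identity stand-in)."""
    return features

def decoder(memory, queries):
    """Step 3: self- and encoder-decoder attention yield one output
    embedding per query (zeros stand in for the attention result)."""
    return np.zeros((queries.shape[0], memory.shape[1]))

def heads(embeddings):
    """Step 4: MLPs map output embeddings to boxes and class logits."""
    n = embeddings.shape[0]
    return np.zeros((n, 4)), np.zeros((n, 2))   # (cx, cy, w, h), logits

frame = np.zeros((512, 512, 3))          # one video frame
queries = np.zeros((100, 256))           # object (+ track) queries
memory = encoder(backbone(frame))
boxes, logits = heads(decoder(memory, queries))   # one box/class per query
```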
Objects are implicitly represented in the decoder queries, which are embeddings used by the decoder to output bounding box coordinates and class predictions. The decoder alternates between two types of attention: (i) self-attention over all queries, which allows for joint reasoning about the objects in a scene, and (ii) encoder-decoder attention, which gives queries global access to the visual information of the encoded features. The output embeddings accumulate bounding box and class information over multiple decoding layers. The permutation invariance of Transformers requires additive feature and object encodings for the frame features and decoder queries, respectively.

3.2 Tracking-by-attention with queries
The total set of output embeddings is initialized with two types of query encodings: (i) static object queries, which allow the model to initialize tracks at any frame of the video, and (ii) autoregressive track queries, which are responsible for tracking objects across frames.

The simultaneous decoding of object and track queries allows our model to perform detection and tracking in a unified way, thereby introducing a new tracking-by-attention paradigm. Different tracking-by-X approaches are defined by their key component responsible for track generation. For tracking-by-detection, the tracking is performed by computing/modelling distances between frame-wise object detections. The tracking-by-regression paradigm also performs object detection, but tracks are generated by regressing each object box to its new position in the current frame. Technically, our TrackFormer also performs regression in the mapping of object embeddings with MLPs. However, the actual track association happens earlier, via attention in the Transformer decoder. A detailed architecture overview which illustrates the integration of track and object queries into the Transformer decoder is shown in the appendix.

Track initialization

New objects appearing in the scene are detected by a fixed number of output embeddings, each initialized with a static, learned object encoding, referred to as object queries [7]. Intuitively, each object query learns to predict objects with certain spatial properties, such as bounding box size and position. The decoder self-attention relies on the object encodings to avoid duplicate detections and to reason about spatial and categorical relations of objects. The number of object queries should exceed the maximum number of objects per frame.

Track queries

To achieve frame-to-frame track generation, we introduce the concept of track queries to the decoder. Track queries follow objects through a video sequence, carrying over their identity information while adapting to their changing position in an autoregressive manner.

For this purpose, each new object detection initializes a track query with the corresponding output embedding of the previous frame. The Transformer encoder-decoder performs attention on frame features and decoder queries, continuously updating the instance-specific representation of an object's identity and location in each track query embedding. Self-attention over the joint set of both query types allows for the detection of new objects while simultaneously avoiding re-detection of already tracked objects.
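The initialization described here is mostly bookkeeping, which a toy sketch makes explicit: output embeddings of newly detected objects become next-frame track queries with a persistent identity. The `step` function and its data layout are illustrative, not the actual implementation:

```python
def step(track_queries, new_detections, next_id):
    """One frame of query bookkeeping.

    track_queries: list of {"id": ..., "embedding": ...} carried over
    from the previous frame (updated autoregressively by the decoder).
    new_detections: output embeddings produced by object queries."""
    updated = [dict(t) for t in track_queries]
    for emb in new_detections:
        # each new detection spawns a track query with a fresh identity
        updated.append({"id": next_id, "embedding": emb})
        next_id += 1
    return updated, next_id

tracks, nid = step([], ["emb_a", "emb_b"], 0)   # frame 1: two objects enter
tracks, nid = step(tracks, ["emb_c"], nid)      # frame 2: a third one enters
```

After the second frame, the model would decode three track queries plus the full set of object queries, with self-attention over the joint set suppressing re-detections of already tracked objects.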


The ability to decode an arbitrary number of track queries allows for an attention-based short-term re-identification process. We keep decoding previously removed track queries for a maximum number of track re-identification frames. During this patience window, track queries are considered inactive and do not contribute to the trajectory until a classification score higher than σ_track triggers a re-identification. The spatial information embedded in each track query prevents its application to long-term occlusions with large object movement.
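The patience-window logic can be sketched as follows; `TRACK_REID_FRAMES` and `SIGMA_TRACK` mirror the parameters named in the text, while the data structures are illustrative:

```python
TRACK_REID_FRAMES = 5    # how long an inactive query keeps being decoded
SIGMA_TRACK = 0.8        # classification score that triggers re-identification

def update_inactive(inactive, scores):
    """inactive: dict id -> frames since the track was removed.
    scores: dict id -> classification score decoded this frame."""
    reactivated, still_inactive = [], {}
    for tid, age in inactive.items():
        if scores.get(tid, 0.0) > SIGMA_TRACK:
            reactivated.append(tid)          # short-term re-identification
        elif age + 1 < TRACK_REID_FRAMES:
            still_inactive[tid] = age + 1    # keep decoding, still inactive
        # else: patience exhausted, the track query is dropped for good
    return reactivated, still_inactive
```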

3.3 TrackFormer training

For track queries to work in interaction with object queries and follow objects to the next frame, TrackFormer requires dedicated frame-to-frame tracking training. As indicated in Figure 2, we train on two adjacent frames and optimize the entire MOT objective at once. The loss for frame t measures the set prediction of all output embeddings N = N_object + N_track with respect to the ground truth objects in terms of class and bounding box prediction.
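The per-frame set prediction loss relies on a one-to-one matching between ground-truth objects and output embeddings. Here is a toy sketch with brute-force matching over permutations (in place of the Hungarian algorithm used in DETR-style training) and only an L1 box cost; the class term and the real cost weighting are omitted for brevity:

```python
from itertools import permutations

def box_l1(a, b):
    """L1 distance between two boxes in (cx, cy, w, h) form."""
    return sum(abs(x - y) for x, y in zip(a, b))

def set_loss(pred_boxes, gt_boxes):
    """Minimal total matching cost over all one-to-one assignments of
    ground-truth boxes to predicted boxes (feasible only for toy sizes)."""
    best = float("inf")
    for perm in permutations(range(len(pred_boxes)), len(gt_boxes)):
        cost = sum(box_l1(pred_boxes[p], gt_boxes[g])
                   for g, p in enumerate(perm))
        best = min(best, cost)
    return best
```

Unmatched predictions would be supervised toward the background class, which is how the model learns to suppress duplicate and spurious boxes.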

Set prediction loss in two steps: detection + tracking


4. Experiments

