Flink task keeps restarting after a Kafka broker scale-down

Background

Flink version: 1.12.2
Kafka client version: 2.4.1
A streaming job on the company Flink platform that reads Kafka and computes DAU kept restarting after the company scaled in its Kafka cluster, and had still not recovered after an hour of restarts (the scale-in took four Kafka brokers offline, while the job had twelve brokers configured at the time). The scene looked like this:

The logs on the JobManager:
2023-10-07 10:02:52.975 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: TableSourceScan(table=[[default_catalog, default_database, ubt_start, watermark=[-(LOCALTIMESTAMP, 1000:INTERVAL SECOND)]]]) (34/64) (e33d9ad0196a71e8eb551c181eb779b5) switched from RUNNING to FAILED on container_e08_1690538387235_2599_01_000010 @ task-xxxx-shanghai.emr.aliyuncs.com (dataPort=xxxx).
org.apache.flink.streaming.connectors.kafka.internals.Handover$ClosedException: null
	at org.apache.flink.streaming.connectors.kafka.internals.Handover.close(Handover.java:177)
	at org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher.cancel(KafkaFetcher.java:164)
	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.cancel(FlinkKafkaConsumerBase.java:945)
	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.lambda$createAndStartDiscoveryLoop$2(FlinkKafkaConsumerBase.java:913)
	at java.lang.Thread.run(Thread.java:750)

The logs on the corresponding TaskManager (task-xxxx-shanghai.emr.aliyuncs.com):

2023-10-07 10:02:24.604 WARN  org.apache.kafka.clients.NetworkClient - [Consumer clientId=xxxx] Connection to node 46129 (sh-bs-b1-303-i14-kafka-129-46.ximalaya.local/192.168.129.46:9092) could not be established. Broker may not be available.
2023-10-07 10:02:52.939 WARN  org.apache.flink.runtime.taskmanager.Task - Source: TableSourceScan(t) (34/64)#0 (e33d9ad0196a71e8eb551c181eb779b5) switched from RUNNING to FAILED.
org.apache.flink.streaming.connectors.kafka.internals.Handover$ClosedException: null
	at org.apache.flink.streaming.connectors.kafka.internals.Handover.close(Handover.java:177)
	at org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher.cancel(KafkaFetcher.java:164)
	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.cancel(FlinkKafkaConsumerBase.java:945)
	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.lambda$createAndStartDiscoveryLoop$2(FlinkKafkaConsumerBase.java:913)
	at java.lang.Thread.run(Thread.java:750)
2023-10-07 10:04:58.205 WARN  org.apache.kafka.clients.NetworkClient - [Consumer clientId=xxx, groupId=xxxx] Connection to node -4 (xxxx:909) could not be established. Broker may not be available.
2023-10-07 10:04:58.205 WARN  org.apache.kafka.clients.NetworkClient - [Consumer clientId=xxx, groupId=xxxx] Bootstrap broker sxxxx:909 (id: -4 rack: null) disconnected
2023-10-07 10:04:58.206 WARN  org.apache.kafka.clients.NetworkClient - [Consumer clientId=xxx, groupId=xxxxu] Connection to node -5 (xxxx:9092) could not be established. Broker may not be available.
2023-10-07 10:04:58.206 WARN  org.apache.kafka.clients.NetworkClient - [Consumer clientId=xxx, groupId=xxxxu] Bootstrap broker xxxx:9092 (id: -5 rack: null) disconnected
2023-10-07 10:08:15.541 WARN  org.apache.flink.runtime.taskmanager.Task - Source: TableSourceScan(xxx) switched from RUNNING to FAILED.
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata

The Kafka-source-related configuration in Flink at the time:

scan.topic-partition-discovery.interval  300000
restart-strategy.type fixed-delay
restart-strategy.fixed-delay.attempts 50000000
jobmanager.execution.failover-strategy region

(With a fixed-delay restart strategy and 50,000,000 attempts, the job effectively restarts forever, which is why it kept failing and restarting instead of terminating.)

Conclusion and fix

The Kafka consumer has two relevant parameters: default.api.timeout.ms (default 60000) and request.timeout.ms (default 30000). Together they control how the Kafka client's requests to the server time out: each individual request times out after 30s and can then be retried, and if no successful response has arrived within the overall 60s budget, a TimeoutException is thrown. See the analysis below for details.
We fixed this by setting the following options on the Flink Kafka connector:

`properties.default.api.timeout.ms` = '600000',
`properties.request.timeout.ms` = '5000',
// max.block.ms sets the corresponding timeout for the Kafka producer
`properties.max.block.ms` = '600000',
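These are Flink SQL connector options; the "properties." prefix passes them straight through to the Kafka client. The same fix in the DataStream API looks roughly like the following sketch (class name, broker addresses, and group id are placeholders; the topic name ubt_start is taken from the logs above):

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class DauJobTimeouts {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholder addresses
        props.setProperty("group.id", "dau-demo");                           // placeholder group
        // Large overall budget: a metadata call keeps cycling through brokers
        // for up to 10 minutes instead of failing the whole job after 60s.
        props.setProperty("default.api.timeout.ms", "600000");
        // Small per-request budget: a dead broker is abandoned after 5s,
        // so the client quickly moves on to a live one.
        props.setProperty("request.timeout.ms", "5000");

        env.addSource(new FlinkKafkaConsumer<>("ubt_start", new SimpleStringSchema(), props))
           .print();
        env.execute("dau-demo");
    }
}

The key idea is the asymmetry: a generous total budget combined with a short per-request timeout lets the client skip unreachable brokers quickly while still tolerating a long window of partial unavailability.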

Analysis

In Flink, the DynamicTableSourceFactory implementation for the Kafka connector is KafkaDynamicTableFactory (here we only discuss Kafka as a source).
That factory's createDynamicTableSource method is eventually invoked; for the exact call chain, see "Apache Hudi初探(四)(与flink的结合) -- Flink Sql中hudi的createDynamicTableSource/createDynamicTableSink是怎么被调用" (the same reasoning applies with Sink replaced by Source). The chain ultimately leads to the KafkaDynamicSource class:

@Override
public ScanRuntimeProvider getScanRuntimeProvider(ScanContext context) {
    final DeserializationSchema<RowData> keyDeserialization =
            createDeserialization(context, keyDecodingFormat, keyProjection, keyPrefix);
    final DeserializationSchema<RowData> valueDeserialization =
            createDeserialization(context, valueDecodingFormat, valueProjection, null);
    final TypeInformation<RowData> producedTypeInfo =
            context.createTypeInformation(producedDataType);
    final FlinkKafkaConsumer<RowData> kafkaConsumer =
            createKafkaConsumer(keyDeserialization, valueDeserialization, producedTypeInfo);
    return SourceFunctionProvider.of(kafkaConsumer, false);
}

Its getScanRuntimeProvider method is then invoked, and all Kafka-related operations can be traced back to the FlinkKafkaConsumer class (which extends FlinkKafkaConsumerBase). Its key methods are:

@Override
public final void initializeState(FunctionInitializationContext context) throws Exception {
    OperatorStateStore stateStore = context.getOperatorStateStore();
    this.unionOffsetStates =
            stateStore.getUnionListState(
                    new ListStateDescriptor<>(
                            OFFSETS_STATE_NAME,
                            createStateSerializer(getRuntimeContext().getExecutionConfig())));
    ...
}

@Override
public void open(Configuration configuration) throws Exception {
    // determine the offset commit mode
    this.offsetCommitMode =
            OffsetCommitModes.fromConfiguration(
                    getIsAutoCommitEnabled(),
                    enableCommitOnCheckpoints,
                    ((StreamingRuntimeContext) getRuntimeContext()).isCheckpointingEnabled());

    // create the partition discoverer
    this.partitionDiscoverer =
            createPartitionDiscoverer(
                    topicsDescriptor,
                    getRuntimeContext().getIndexOfThisSubtask(),
                    getRuntimeContext().getNumberOfParallelSubtasks());
    this.partitionDiscoverer.open();

    subscribedPartitionsToStartOffsets = new HashMap<>();
    final List<KafkaTopicPartition> allPartitions = partitionDiscoverer.discoverPartitions();
    if (restoredState != null) {
        ...
    } else {
        // use the partition discoverer to fetch the initial seed partitions,
        // and set their initial offsets depending on the startup mode.
        // for SPECIFIC_OFFSETS and TIMESTAMP modes, we set the specific offsets now;
        // for other modes (EARLIEST, LATEST, and GROUP_OFFSETS), the offset is lazily
        // determined when the partition is actually read.
        switch (startupMode) {
            ...
            default:
                for (KafkaTopicPartition seedPartition : allPartitions) {
                    subscribedPartitionsToStartOffsets.put(
                            seedPartition, startupMode.getStateSentinel());
                }
        }

        if (!subscribedPartitionsToStartOffsets.isEmpty()) {
            switch (startupMode) {
                ...
                case GROUP_OFFSETS:
                    LOG.info(
                            "Consumer subtask {} will start reading the following {} partitions from the committed group offsets in Kafka: {}",
                            getRuntimeContext().getIndexOfThisSubtask(),
                            subscribedPartitionsToStartOffsets.size(),
                            subscribedPartitionsToStartOffsets.keySet());
            }
        } else {
            LOG.info(
                    "Consumer subtask {} initially has no partitions to read from.",
                    getRuntimeContext().getIndexOfThisSubtask());
        }
    }
    this.deserializer.open(
            RuntimeContextInitializationContextAdapters.deserializationAdapter(
                    getRuntimeContext(), metricGroup -> metricGroup.addGroup("user")));
}

@Override
public void run(SourceContext<T> sourceContext) throws Exception {
    if (subscribedPartitionsToStartOffsets == null) {
        throw new Exception("The partitions were not set for the consumer");
    }

    // initialize commit metrics and default offset callback method
    this.successfulCommits =
            this.getRuntimeContext().getMetricGroup().counter(COMMITS_SUCCEEDED_METRICS_COUNTER);
    this.failedCommits =
            this.getRuntimeContext().getMetricGroup().counter(COMMITS_FAILED_METRICS_COUNTER);
    final int subtaskIndex = this.getRuntimeContext().getIndexOfThisSubtask();

    this.offsetCommitCallback =
            new KafkaCommitCallback() {
                @Override
                public void onSuccess() {
                    successfulCommits.inc();
                }

                @Override
                public void onException(Throwable cause) {
                    LOG.warn(
                            String.format(
                                    "Consumer subtask %d failed async Kafka commit.",
                                    subtaskIndex),
                            cause);
                    failedCommits.inc();
                }
            };

    // mark the subtask as temporarily idle if there are no initial seed partitions;
    // once this subtask discovers some partitions and starts collecting records, the subtask's
    // status will automatically be triggered back to be active.
    if (subscribedPartitionsToStartOffsets.isEmpty()) {
        sourceContext.markAsTemporarilyIdle();
    }

    LOG.info(
            "Consumer subtask {} creating fetcher with offsets {}.",
            getRuntimeContext().getIndexOfThisSubtask(),
            subscribedPartitionsToStartOffsets);

    // from this point forward:
    //   - 'snapshotState' will draw offsets from the fetcher,
    //     instead of being built from `subscribedPartitionsToStartOffsets`
    //   - 'notifyCheckpointComplete' will start to do work (i.e. commit offsets to
    //     Kafka through the fetcher, if configured to do so)
    this.kafkaFetcher =
            createFetcher(
                    sourceContext,
                    subscribedPartitionsToStartOffsets,
                    watermarkStrategy,
                    (StreamingRuntimeContext) getRuntimeContext(),
                    offsetCommitMode,
                    getRuntimeContext().getMetricGroup().addGroup(KAFKA_CONSUMER_METRICS_GROUP),
                    useMetrics);

    if (!running) {
        return;
    }

    if (discoveryIntervalMillis == PARTITION_DISCOVERY_DISABLED) {
        kafkaFetcher.runFetchLoop();
    } else {
        runWithPartitionDiscovery();
    }
}

@Override
public final void snapshotState(FunctionSnapshotContext context) throws Exception {
    ...
    HashMap<KafkaTopicPartition, Long> currentOffsets = fetcher.snapshotCurrentState();
    if (offsetCommitMode == OffsetCommitMode.ON_CHECKPOINTS) {
        // the map cannot be asynchronously updated, because only one checkpoint call
        // can happen on this function at a time: either snapshotState() or
        // notifyCheckpointComplete()
        pendingOffsetsToCommit.put(context.getCheckpointId(), currentOffsets);
    }
    for (Map.Entry<KafkaTopicPartition, Long> kafkaTopicPartitionLongEntry :
            currentOffsets.entrySet()) {
        unionOffsetStates.add(
                Tuple2.of(
                        kafkaTopicPartitionLongEntry.getKey(),
                        kafkaTopicPartitionLongEntry.getValue()));
    }
    ...
}

@Override
public final void notifyCheckpointComplete(long checkpointId) throws Exception {
    ...
    fetcher.commitInternalOffsetsToKafka(offsets, offsetCommitCallback);
    ...
}

The key methods are initializeState, open, run, snapshotState, and notifyCheckpointComplete; below we walk through them one by one, question by question.
Note: for the relative order of initializeState and open, see the StreamTask class and the following call chain:

invoke()
  -> beforeInvoke()
  -> operatorChain.initializeStateAndOpenOperators
  -> FlinkKafkaConsumerBase.initializeState
  -> FlinkKafkaConsumerBase.open

which shows that initializeState is invoked before open.
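As a quick illustration of this ordering, here is a minimal sketch (a hypothetical demo class, not Flink source code): any rich source that implements CheckpointedFunction observes initializeState before open.

import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
import org.apache.flink.streaming.api.functions.source.RichSourceFunction;

// Hypothetical demo class: when deployed, it prints "initializeState" before
// "open", mirroring the StreamTask call chain above.
public class LifecycleOrderSource extends RichSourceFunction<String>
        implements CheckpointedFunction {

    private volatile boolean running = true;

    @Override
    public void initializeState(FunctionInitializationContext context) {
        System.out.println("1. initializeState: restore state here");
    }

    @Override
    public void open(Configuration parameters) {
        System.out.println("2. open: state restored above is already visible here");
    }

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        while (running) {
            Thread.sleep(1000); // placeholder for actual record emission
        }
    }

    @Override
    public void cancel() {
        running = false;
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) {
        // called on each checkpoint
    }
}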

The initializeState method

This restores the Kafka topic/offset information from persisted state. Here we assume the job is starting for the first time (no restored state).

The open method

  • offsetCommitMode
    offsetCommitMode = OffsetCommitModes.fromConfiguration determines the configured Kafka offset commit mode. It combines enable.auto.commit (default true), enableCommitOnCheckpoints (default true), and whether checkpointing is enabled (default false; we enable it). Taken together, these yield OffsetCommitMode.ON_CHECKPOINTS.
  • partitionDiscoverer
    This performs partition discovery for the Kafka topic. The main flow is partitionDiscoverer.discoverPartitions, with the following call chain:

    AbstractPartitionDiscoverer.discoverPartitions
      -> AbstractPartitionDiscoverer.getAllPartitionsForTopics
      -> KafkaPartitionDiscoverer.kafkaConsumer.partitionsFor
      -> KafkaConsumer.partitionsFor(topic, Duration.ofMillis(defaultApiTimeoutMs)) // defaultApiTimeoutMs comes from default.api.timeout.ms
      -> Fetcher.getTopicMetadata // where new TimeoutException("Timeout expired while fetching topic metadata") is eventually thrown
      -> Fetcher.sendMetadataRequest => NetworkClient.leastLoadedNode // randomly picks one of the configured broker nodes
      -> client.poll(future, timer) => NetworkClient.poll => selector.poll(Utils.min(timeout, metadataTimeout, defaultRequestTimeoutMs)) // defaultRequestTimeoutMs comes from request.timeout.ms

    In short, discoverPartitions randomly picks one of the configured brokers and sends it a request; when request.timeout.ms expires it picks another broker, until the total elapsed time reaches default.api.timeout.ms. With the defaults (default.api.timeout.ms = 60s, request.timeout.ms = 30s), two unreachable brokers in a row are enough to exhaust the whole budget (see the reproduction sketch after this list).
  • subscribedPartitionsToStartOffsets
    Based on startupMode (the default is StartupMode.GROUP_OFFSETS, i.e. resume from the last committed offsets), this records the starting offsets of the subscribed partitions, which the kafkaFetcher uses later.
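To make the timeout arithmetic concrete, here is a minimal reproduction sketch with a bare Kafka 2.4 consumer (broker addresses, group id, and topic are placeholders): if the randomly chosen bootstrap brokers are unreachable, each attempt burns request.timeout.ms, and after default.api.timeout.ms in total partitionsFor throws the same TimeoutException seen in the TaskManager logs.

import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;

public class MetadataTimeoutDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholders
        props.setProperty("group.id", "demo");
        props.setProperty("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.setProperty("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.setProperty("default.api.timeout.ms", "60000"); // total budget for partitionsFor()
        props.setProperty("request.timeout.ms", "30000");     // budget per broker attempt

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // With the defaults above: pick a broker, wait up to 30s, pick another,
            // wait up to 30s, then fail the call with
            // org.apache.kafka.common.errors.TimeoutException:
            //   Timeout expired while fetching topic metadata
            List<PartitionInfo> partitions = consumer.partitionsFor("demo_topic");
            System.out.println(partitions);
        }
    }
}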

The run method

  • Sets up the metrics successfulCommits/failedCommits
  • KafkaFetcher
    This fetches data from Kafka and, when partition discovery is configured, also loops over Kafka topic partition discovery. The loop interval comes from scan.topic-partition-discovery.interval (default 0, i.e. disabled); in practice we set it to 300000, i.e. 5 minutes (see the DataStream-API sketch after this section for the equivalent consumer property). The main flow is in runWithPartitionDiscovery:
      private void runWithPartitionDiscovery() throws Exception {
          final AtomicReference<Exception> discoveryLoopErrorRef = new AtomicReference<>();
          createAndStartDiscoveryLoop(discoveryLoopErrorRef);

          kafkaFetcher.runFetchLoop();

          // make sure that the partition discoverer is waked up so that
          // the discoveryLoopThread exits
          partitionDiscoverer.wakeup();
          joinDiscoveryLoopThread();

          // rethrow any fetcher errors
          final Exception discoveryLoopError = discoveryLoopErrorRef.get();
          if (discoveryLoopError != null) {
              throw new RuntimeException(discoveryLoopError);
          }
      }
    • createAndStartDiscoveryLoop starts a single thread that runs partition discovery in a while/sleep loop, using scan.topic-partition-discovery.interval as the interval. Note that exceptions are swallowed here (recorded in discoveryLoopErrorRef) rather than thrown:

      private void createAndStartDiscoveryLoop(AtomicReference<Exception> discoveryLoopErrorRef) {
          discoveryLoopThread =
                  new Thread(
                          () -> {
                              try {
                                  ...
                                  while (running) {
                                      ...
                                      try {
                                          discoveredPartitions =
                                                  partitionDiscoverer.discoverPartitions();
                                      } catch (AbstractPartitionDiscoverer.WakeupException
                                              | AbstractPartitionDiscoverer.ClosedException e) {
                                          break;
                                      }
                                      if (running && !discoveredPartitions.isEmpty()) {
                                          kafkaFetcher.addDiscoveredPartitions(discoveredPartitions);
                                      }
                                      if (running && discoveryIntervalMillis != 0) {
                                          try {
                                              Thread.sleep(discoveryIntervalMillis);
                                          } catch (InterruptedException iex) {
                                              break;
                                          }
                                      }
                                  }
                              } catch (Exception e) {
                                  // NOTE: the exception is only recorded here, not rethrown
                                  discoveryLoopErrorRef.set(e);
                              } finally {
                                  // calling cancel will also let the fetcher loop escape
                                  // (if not running, cancel() was already called)
                                  if (running) {
                                      cancel();
                                  }
                              }
                          },
                          "Kafka Partition Discovery for "
                                  + getRuntimeContext().getTaskNameWithSubtasks());

          discoveryLoopThread.start();
      }
      

      Here kafkaFetcher.addDiscoveredPartitions(discoveredPartitions) saves the discovered partition information into the subscribedPartitionStates variable; kafkaFetcher.runFetchLoop uses it to track the committed offset information, and snapshotState reads it later.

    • kafkaFetcher.runFetchLoop pulls data from Kafka and records the Kafka offsets; the specific flow is:

       runFetchLoop
         -> subscribedPartitionStates // reads the subscribedPartitionStates variable
         -> partitionConsumerRecordsHandler
         -> emitRecordsWithTimestamps
         -> partitionState.setOffset(offset)
      

      The offset here comes from the Kafka records that were just consumed.
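For completeness, in the DataStream API the same discovery interval is enabled through the consumer property flink.partition-discovery.interval-millis (the constant FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS). A minimal sketch, assuming placeholder broker addresses and group id:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase;

public class DiscoveryIntervalDemo {
    public static FlinkKafkaConsumer<String> buildSource() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker1:9092"); // placeholder
        props.setProperty("group.id", "demo");                  // placeholder
        // Re-run partition discovery every 5 minutes, the DataStream-API
        // equivalent of scan.topic-partition-discovery.interval = 300000.
        props.setProperty(
                FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS, "300000");
        return new FlinkKafkaConsumer<>("ubt_start", new SimpleStringSchema(), props);
    }
}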

The snapshotState method

This processes the information in subscribedPartitionStates, mainly adding it to the pendingOffsetsToCommit variable.

  • offsetCommitMode
    As established above, this is OffsetCommitMode.ON_CHECKPOINTS. In that mode, snapshotState obtains subscribedPartitionStates via fetcher.snapshotCurrentState, adds the offsets to pendingOffsetsToCommit (keyed by checkpoint id), and persists them into unionOffsetStates. The actual Kafka offset commit happens later, in notifyCheckpointComplete; the sketch after this list shows the pattern.
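The interplay of the two callbacks boils down to the following pattern (a simplified, self-contained illustration with hypothetical names, not Flink's actual classes): snapshotState freezes the current offsets under the checkpoint id, and notifyCheckpointComplete commits them only once the checkpoint has fully completed.

import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class OnCheckpointsCommitPattern {
    // partition -> latest consumed offset, continuously updated by the fetch loop
    private final Map<Integer, Long> currentOffsets = new HashMap<>();
    // checkpointId -> frozen offsets, filled by snapshotState
    private final TreeMap<Long, Map<Integer, Long>> pendingOffsetsToCommit = new TreeMap<>();

    void onRecord(int partition, long offset) {
        currentOffsets.put(partition, offset); // what partitionState.setOffset(offset) does
    }

    void snapshotState(long checkpointId) {
        // freeze the offsets under this checkpoint id; nothing is committed yet
        pendingOffsetsToCommit.put(checkpointId, new HashMap<>(currentOffsets));
    }

    void notifyCheckpointComplete(long checkpointId) {
        // only now are the offsets of the *completed* checkpoint committed,
        // the role played by fetcher.commitInternalOffsetsToKafka(...)
        Map<Integer, Long> offsets = pendingOffsetsToCommit.remove(checkpointId);
        if (offsets != null) {
            System.out.println("commit to Kafka: " + offsets);
        }
    }
}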

The notifyCheckpointComplete method

This retrieves the Kafka offsets recorded for the completed checkpoint and commits them back to Kafka via fetcher.commitInternalOffsetsToKafka.

References

  • The initialization order of open and initializeState
  • A single failing Kafka broker may cause jobs to fail indefinitely with TimeoutException: Timeout expired while fetching topic metadata
