Flink CDC in Practice and the Problems We Hit

Background

Recently our company has been making some architectural adjustments on the business side, part of which is ingesting data from MySQL into StarRocks. The previous pipeline was Debezium -> Pulsar -> StarRocks, which requires configuring quite a lot of components, and when something breaks, many of those configurations have to be changed. It currently has a fair number of problems, and since it captures every production database, it is painful to maintain.
So we reworked the ingestion data flow to use Flink CDC. It is end to end, driven entirely by configuration, supports schema changes, and removes the need for an extra intermediate storage layer.

Final Configuration

For how to use and configure Flink CDC, see "Flink CDC 源码解析–整体流程" (a source-code walkthrough of the overall flow); here I will only post the configuration we ended up using:

source:
  type: mysql
  name: Database mysql to Data warehouse
  hostname: xxxx
  port: 3306
  username: xxx
  password: xxx
  tables: db1.table1
  server-id: 556401-556500
  scan.startup.mode: initial
  scan.snapshot.fetch.size: 8096
  scan.incremental.snapshot.chunk.size: 16192
  debezium.max.queue.size: 162580
  debezium.max.batch.size: 40960
  debezium.poll.interval.ms: 50
  scan.only.deserialize.captured.tables.changelog.enabled: true
  scan.parallel-deserialize-changelog.enabled: true
  heartbeat.interval: 5s
  scan.newly-added-table.enabled: true

sink:
  type: starrocks
  name: StarRocks Sink
  jdbc-url: xxx
  load-url: xxx
  username: xxx
  password: xxx
  sink.buffer-flush.interval-ms: 5000
  table.create.properties.replication_num: 3
  table.create.num-buckets: 3

route:
  - source-table: db1.\.*
    sink-table: db1.<>
    replace-symbol: <>
    description: route all tables to starrocks

pipeline:
  name: Sync mysql Database to StarRocks
  parallelism: 1
  schema.change.behavior: EVOLVE
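
For reference, in the open-source Flink CDC 3.x distribution a pipeline definition like the one above is submitted with the flink-cdc.sh launcher (the file name mysql-to-starrocks.yaml is our own; the MySQL and StarRocks pipeline connector jars need to sit in the distribution's lib directory, and FLINK_HOME must point at a running Flink cluster):

    cd flink-cdc-3.x
    bash bin/flink-cdc.sh mysql-to-starrocks.yaml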

Problems Encountered

  1. EventHeaderV4 deserialization failure
    The error:

      Caused by: io.debezium.DebeziumException: Failed to deserialize data of EventHeaderV4{timestamp=1732257303000, eventType=WRITE_ROWS, serverId=28555270, headerLength=19, dataLength=320, nextPosition=383299196, flags=0}
          at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.wrap(MySqlStreamingChangeEventSource.java:1718)
          ... 5 more
      Caused by: com.github.shyiko.mysql.binlog.event.deserialization.EventDataDeserializationException: Failed to deserialize data of EventHeaderV4{timestamp=1732257303000, eventType=WRITE_ROWS, serverId=28555270, headerLength=19, dataLength=320, nextPosition=383299196, flags=0}
          at com.github.shyiko.mysql.binlog.event.deserialization.EventDeserializer.deserializeEventData(EventDeserializer.java:358)
          at com.github.shyiko.mysql.binlog.event.deserialization.EventDeserializer.nextEvent(EventDeserializer.java:252)
          at io.debezium.connector.mysql.MySqlStreamingChangeEventSource$1.nextEvent(MySqlStreamingChangeEventSource.java:388)
          at com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:1187)
          ... 3 more
      Caused by: java.io.EOFException: Failed to read remaining 28 of 36 bytes from position 258280448. Block length: 183. Initial block length: 316.
          at com.github.shyiko.mysql.binlog.io.ByteArrayInputStream.fill(ByteArrayInputStream.java:115)
          at com.github.shyiko.mysql.binlog.io.ByteArrayInputStream.read(ByteArrayInputStream.java:105)
          at io.debezium.connector.mysql.RowDeserializers.deserializeVarString(RowDeserializers.java:264)
          at io.debezium.connector.mysql.RowDeserializers$WriteRowsDeserializer.deserializeVarString(RowDeserializers.java:192)
          at com.github.shyiko.mysql.binlog.event.deserialization.AbstractRowsEventDataDeserializer.deserializeCell(AbstractRowsEventDataDeserializer.java:189)
          at com.github.shyiko.mysql.binlog.event.deserialization.AbstractRowsEventDataDeserializer.deserializeRow(AbstractRowsEventDataDeserializer.java:143)
          at com.github.shyiko.mysql.binlog.event.deserialization.WriteRowsEventDataDeserializer.deserializeRows(WriteRowsEventDataDeserializer.java:75)
          at com.github.shyiko.mysql.binlog.event.deserialization.WriteRowsEventDataDeserializer.deserialize(WriteRowsEventDataDeserializer.java:65)
          at com.github.shyiko.mysql.binlog.event.deserialization.WriteRowsEventDataDeserializer.deserialize(WriteRowsEventDataDeserializer.java:38)
          at com.github.shyiko.mysql.binlog.event.deserialization.EventDeserializer.deserializeEventData(EventDeserializer.java:352)
          ... 6 more
    

    It recovers by itself after a while
    This is fairly puzzling: the job simply comes back on its own after some time. Our current suspects:

    • MySQL connection count and bandwidth pressure
    • MySQL server-side configuration; see the Flink CDC FAQ (a verification sketch follows this list):
      mysql> set global slave_net_timeout = 120; 
      mysql> set global thread_pool_idle_timeout = 120;
      
    • Backpressure in the job; see the Alibaba Cloud Flink docs:
      execution.checkpointing.interval=10min
      execution.checkpointing.tolerable-failed-checkpoints=100
      debezium.connect.keep.alive.interval.ms = 40000
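
    For the first two suspects, it helps to confirm what the server is actually running with before changing anything. A quick check (note: thread_pool_idle_timeout only exists when the MySQL thread pool plugin is enabled, and connection pressure shows up in the status counters):

      mysql> show global variables like 'slave_net_timeout';
      mysql> show global variables like 'thread_pool_idle_timeout';
      mysql> show global variables like 'max_connections';
      mysql> show global status like 'Threads_connected';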
      
  2. StarRocks BE memory limit exceeded

      java.lang.RuntimeException: com.starrocks.data.load.stream.exception.StreamLoadFailException: Transaction prepare failed, db: shoufuyou_fund, table: fund_common_repay_push, label: flink-4c6c8cfb-5116-4c38-a60e-a1b87cd6f2f2, responseBody: {"Status": "MEM_LIMIT_EXCEEDED","Message": "Memory of process exceed limit. QUERY Backend: 172.17.172.251, fragment: 9048ed6e-6ffb-04db-081b-a4966b179387 Used: 26469550752, Limit: 26316804096. Mem usage has exceed the limit of BE"} errorLog: null
          at com.starrocks.data.load.stream.v2.StreamLoadManagerV2.AssertNotException(StreamLoadManagerV2.java:427)
          at com.starrocks.data.load.stream.v2.StreamLoadManagerV2.write(StreamLoadManagerV2.java:252)
          at com.starrocks.connector.flink.table.sink.v2.StarRocksWriter.write(StarRocksWriter.java:143)
          at org.apache.flink.streaming.runtime.operators.sink.SinkWriterOperator.processElement(SinkWriterOperator.java:182)
          at org.apache.flink.cdc.runtime.operators.sink.DataSinkWriterOperator.processElement(DataSinkWriterOperator.java:178)
          at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:75)
          at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:50)
          at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:29)
          at org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:38)
          at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:245)
          at org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:217)
          at org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:169)
          at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:68)
          at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:616)
          at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231)
          at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:1071)
          at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:1020)
          at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:959)
          at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:938)
          at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:751)
          at org.apache.flink.runtime.taskmanager.Task.run(Task.java:567)
          at java.lang.Thread.run(Thread.java:879)
      Caused by: com.starrocks.data.load.stream.exception.StreamLoadFailException: Transaction prepare failed, db: shoufuyou_fund, table: fund_common_repay_push, label: flink-4c6c8cfb-5116-4c38-a60e-a1b87cd6f2f2, responseBody: {"Status": "MEM_LIMIT_EXCEEDED","Message": "Memory of process exceed limit. QUERY Backend: 172.17.172.251, fragment: 9048ed6e-6ffb-04db-081b-a4966b179387 Used: 26469550752, Limit: 26316804096. Mem usage has exceed the limit of BE"} errorLog: null
          at com.starrocks.data.load.stream.TransactionStreamLoader.prepare(TransactionStreamLoader.java:221)
          at com.starrocks.data.load.stream.v2.TransactionTableRegion.commit(TransactionTableRegion.java:247)
          at com.starrocks.data.load.stream.v2.StreamLoadManagerV2.lambda$init$0(StreamLoadManagerV2.java:210)
          ... 1 more
    

    Our StarRocks BEs have about 32 GB of memory each. With several Flink CDC jobs running at once, too much data is written into the BEs during the CDC initialization (snapshot) phase, and BE memory runs out.
    Solution: lower the parallelism of writes into StarRocks and avoid running too many CDC jobs in parallel at the same time.
    See also "Troubleshooting StarRocks memory hog issues".
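
    Beyond lowering parallelism, the sink-side stream-load buffering can be tightened so each job ships smaller batches to the BEs. A hedged sketch (sink.buffer-flush.max-bytes is a StarRocks Flink connector option; check the exact name and default against your connector version, and the values here are illustrative):

      sink:
        type: starrocks
        # Flush smaller batches more often so a single load transaction
        # does not pin a large chunk of BE memory.
        sink.buffer-flush.max-bytes: 67108864      # ~64 MB per flush
        sink.buffer-flush.interval-ms: 5000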

  3. JobManager runs out of direct buffer memory

      java.lang.OutOfMemoryError: Direct buffer memory
          at java.nio.Bits.reserveMemory(Bits.java:708) ~[?:1.8.0_372]
          at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) ~[?:1.8.0_372]
          at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) ~[?:1.8.0_372]
          at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:247) ~[?:1.8.0_372]
          at sun.nio.ch.IOUtil.write(IOUtil.java:60) ~[?:1.8.0_372]
          at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:234) ~[?:1.8.0_372]
          at java.nio.channels.Channels.writeFullyImpl(Channels.java:78) ~[?:1.8.0_372]
          at java.nio.channels.Channels$1.write(Channels.java:174) ~[?:1.8.0_372]
          at org.apache.flink.core.fs.OffsetAwareOutputStream.write(OffsetAwareOutputStream.java:48) ~[ververica-connector-vvp-1.17-vvr-8.0.9-2-SNAPSHOT-jar-with-dependencies.jar:1.17-vvr-8.0.9-2-SNAPSHOT]
          at org.apache.flink.core.fs.RefCountedFileWithStream.write(RefCountedFileWithStream.java:54) ~[ververica-connector-vvp-1.17-vvr-8.0.9-2-SNAPSHOT-jar-with-dependencies.jar:1.17-vvr-8.0.9-2-SNAPSHOT]
          at org.apache.flink.core.fs.RefCountedBufferingFileStream.write(RefCountedBufferingFileStream.java:88) ~[ververica-connector-vvp-1.17-vvr-8.0.9-2-SNAPSHOT-jar-with-dependencies.jar:1.17-vvr-8.0.9-2-SNAPSHOT]
          at org.apache.flink.fs.osshadoop.writer.OSSRecoverableFsDataOutputStream.write(OSSRecoverableFsDataOutputStream.java:130) ~[?:?]
          at org.apache.flink.runtime.state.filesystem.FsCheckpointMetadataOutputStream.write(FsCheckpointMetadataOutputStream.java:78) ~[flink-dist-1.17-vvr-8.0.9-2-SNAPSHOT.jar:1.17-vvr-8.0.9-2-SNAPSHOT]
          at java.io.DataOutputStream.write(DataOutputStream.java:107) ~[?:1.8.0_372]
          at java.io.FilterOutputStream.write(FilterOutputStream.java:97) ~[?:1.8.0_372]
          at org.apache.flink.runtime.checkpoint.metadata.MetadataV2V3SerializerBase.serializeStreamStateHandle(MetadataV2V3SerializerBase.java:703) ~[flink-dist-1.17-vvr-8.0.9-2-SNAPSHOT.jar:1.17-vvr-8.0.9-2-SNAPSHOT]
          at org.apache.flink.runtime.checkpoint.metadata.MetadataV3Serializer.serializeStreamStateHandle(MetadataV3Serializer.java:264) ~[flink-dist-1.17-vvr-8.0.9-2-SNAPSHOT.jar:1.17-vvr-8.0.9-2-SNAPSHOT]
          at org.apache.flink.runtime.checkpoint.metadata.MetadataV3Serializer.serializeOperatorState(MetadataV3Serializer.java:109) ~[flink-dist-1.17-vvr-8.0.9-2-SNAPSHOT.jar:1.17-vvr-8.0.9-2-SNAPSHOT]
          at org.apache.flink.runtime.checkpoint.metadata.MetadataV2V3SerializerBase.serializeMetadata(MetadataV2V3SerializerBase.java:153) ~[flink-dist-1.17-vvr-8.0.9-2-SNAPSHOT.jar:1.17-vvr-8.0.9-2-SNAPSHOT]
          at org.apache.flink.runtime.checkpoint.metadata.MetadataV3Serializer.serialize(MetadataV3Serializer.java:83) ~[flink-dist-1.17-vvr-8.0.9-2-SNAPSHOT.jar:1.17-vvr-8.0.9-2-SNAPSHOT]
          at org.apache.flink.runtime.checkpoint.metadata.MetadataV4Serializer.serialize(MetadataV4Serializer.java:56) ~[flink-dist-1.17-vvr-8.0.9-2-SNAPSHOT.jar:1.17-vvr-8.0.9-2-SNAPSHOT]
          at org.apache.flink.runtime.checkpoint.Checkpoints.storeCheckpointMetadata(Checkpoints.java:102) ~[flink-dist-1.17-vvr-8.0.

    Solution:
    Add the following configuration:

      jobmanager.memory.off-heap.size: 512mb
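
    The 512 MB figure is what worked for us. More generally, direct buffers come out of the JobManager's off-heap budget, so when checkpoint metadata grows (many operators and splits), the total process size should grow with it. A hedged sketch with illustrative sizes:

      jobmanager.memory.process.size: 2048m     # total JobManager memory
      jobmanager.memory.off-heap.size: 512mb    # direct buffers, e.g. for writing
                                                # checkpoint metadata to OSS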
    
  4. TaskManager runs short of JVM memory

      java.util.concurrent.TimeoutException: Heartbeat of TaskManager with id job-da2375f5-405b-4398-a568-eaba9711576d-taskmanager-1-34 timed out.
          at org.apache.flink.runtime.jobmaster.JobMaster$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(JobMaster.java:1714)
          at org.apache.flink.runtime.heartbeat.DefaultHeartbeatMonitor.run(DefaultHeartbeatMonitor.java:158)
          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
          at java.util.concurrent.FutureTask.run(FutureTask.java:266)
          at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRunAsync$4(AkkaRpcActor.java:453)
          at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
          at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:453)
          at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:218)
          at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:84)
          at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:168)
          at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
          at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
          at scala.PartialFunction.applyOrElse(PartialFunction.scala:127)
          at scala.PartialFunction.applyOrElse$(PartialFunction.scala:126)
          at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
          at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175)
          at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
          at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
          at akka.actor.Actor.aroundReceive(Actor.scala:537)
          at akka.actor.Actor.aroundReceive$(Actor.scala:535)
          at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
          at akka.actor.ActorCell.receiveMessage(ActorCell.scala:579)
          at akka.actor.ActorCell.invoke(ActorCell.scala:547)
          at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
          at akka.dispatch.Mailbox.run(Mailbox.scala:231)
          at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
          at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
          at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
          at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
          at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)

    Solution:
    While the job was running we noticed that the TaskManager's taskmanager.memory.managed.size usage stayed at 0 the whole time. That is because we keep no state here; the job is pure ETL. See the Flink TaskManager Memory Model.
    (figure: Flink TaskManager memory model)
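
    For context, the TaskManager process memory in Flink 1.17 roughly decomposes as below (a simplified sketch of the official memory model; exact components and defaults vary by version):

      taskmanager.memory.process.size
      ├── JVM heap          (framework heap + task heap; where the ETL operators run)
      ├── managed memory    (RocksDB state, batch algorithms; idle in a stateless ETL job)
      ├── direct memory     (framework/task off-heap + network buffers)
      ├── JVM metaspace
      └── JVM overhead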

    So we added the following configuration:

      taskmanager.memory.managed.size: 256mb
      taskmanager.memory.process.size: 4096m
      table.exec.state.ttl: 1m
    
  5. Reading MySQL data too slowly

      java.lang.RuntimeException: One or more fetchers have encountered exception
          at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:261) ~[flink-connector-files-1.17-vvr-8.0.9-2-SNAPSHOT.jar:1.17-vvr-8.0.9-2-SNAPSHOT]
          at org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:185) ~[flink-connector-files-1.17-vvr-8.0.9-2-SNAPSHOT.jar:1.17-vvr-8.0.9-2-SNAPSHOT]
          at org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:144) ~[flink-connector-files-1.17-vvr-8.0.9-2-SNAPSHOT.jar:1.17-vvr-8.0.9-2-SNAPSHOT]
          at org.apache.flink.streaming.api.operators.SourceOperator.pollNext(SourceOperator.java:779) ~[flink-dist-1.17-vvr-8.0.9-2-SNAPSHOT.jar:1.17-vvr-8.0.9-2-SNAPSHOT]
          at org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:457) ~[flink-dist-1.17-vvr-8.0.9-2-SNAPSHOT...
      Caused by: java.lang.RuntimeException: SplitFetcher thread 0 received unexpected exception while polling the records
          at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:165) ~[flink-connector-files-1.17-vvr-8.0.9-2-SNAPSHOT.jar:1.17-vvr-8.0.9-2-SNAPSHOT]
          at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:114) ~[flink-connector-files-1.17-vvr-8.0.9-2-SNAPSHOT.jar:1.17-vvr-8.0.9-2-SNAPSHOT]
          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_372]
          at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_372]
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_372]
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_372]
          ... 1 more
      Caused by: java.lang.IllegalStateException: The connector is trying to read binlog starting at Struct{version=1.9.8.Final,connector=mysql,name=mysql_binlog_source,ts_ms=1732052840471,db=,server_id=0,file=mysql-bin.051880,pos=347695811,row=0}, but this is no longer available on the server. Reconfigure the connector to use a snapshot when needed.
          at org.apache.flink.cdc.connectors.mysql.debezium.task.context.StatefulTaskContext.loadStartingOffsetState(StatefulTaskContext.java:212) ~[?:?]
          at org.apache.flink.cdc.connectors.mysql.debezium.task.context.StatefulTaskContext.configure(StatefulTaskContext.java:133) ~[?:?]
          at org.apache.flink.cdc.connectors.mysql.debezium.reader.BinlogSplitReader.submitSplit(BinlogSplitReader.java:105) ~[?:?]
          at org.apache.flink.cdc.connectors.mysql.debezium.reader.BinlogSplitReader.submitSplit(BinlogSplitReader.java:71) ~[?:?]
          at org.apache.flink.cdc.connectors.mysql.source.reader.MySqlSplitReader.pollSplitRecords(MySqlSplitReader.java:119) ~[?:?]
          at org.apache.flink.cdc.connectors.mysql.source.reader.MySqlSplitReader.fetch(MySqlSplitReader.java:90) ~[?:?]
          at org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58) ~[flink-connector-files-1.17-vvr-8.0.9-2-SNAPSHOT.jar:1.17-vvr-8.0.9-2-SNAPSHOT]
          at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:162) ~[flink-connector-files-1.17-vvr-8.0.9-2-SNAPSHOT.jar:1.17-vvr-8.0.9-2-SNAPSHOT]
          at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:114) ~[flink-connector-files-1.17-vvr-8.0.9-2-SNAPSHOT.jar:1.17-vvr-8.0.9-2-SNAPSHOT]
          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_372]
          at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_372]
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_372]
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_372]
          ... 1 more
    

    Solution:
    Following the Debezium connector docs and the Alibaba Cloud guidance, add the following parameters:

    debezium.max.queue.size: 162580
    debezium.max.batch.size: 40960
    debezium.poll.interval.ms: 50
    scan.only.deserialize.captured.tables.changelog.enabled: true
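
    These options make the reader drain the binlog faster, but it also helps to confirm the server keeps binlogs long enough to cover a backlog. A quick check and adjustment for MySQL 8.0 (on 5.7 the retention variable is expire_logs_days instead; 259200 seconds, about three days, is just an example value):

      mysql> show binary logs;
      mysql> show variables like 'binlog_expire_logs_seconds';
      mysql> set global binlog_expire_logs_seconds = 259200;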
    
  6. Incremental reading too slow, so the binlog was already purged
    Following the Alibaba Cloud guidance, add the following parameters:

     scan.parallel-deserialize-changelog.enabled: true
     scan.parallel-deserialize-changelog.handler.size: 4
     heartbeat.interval: 5s
    
