Flink from Beginner to Practice (3): Real-Time Data Capture with Flink MySQL CDC

Table of Contents

  • Series Index
  • I. Overview
    • 1. Version Compatibility
    • 2. Dependencies
  • II. Implementation
    • 1. Basic Usage
    • 2. More Configuration Options
    • 3. Custom Deserializer
    • 4. Flink SQL Approach
  • III. Pitfalls
    • 1. The MySQL server has a timezone offset (0 seconds ahead of UTC) which does not match the configured timezone Asia/Shanghai.
  • References

Series Index

Flink from Beginner to Practice (1): Getting Started with Flink and Flink Deployment
Flink from Beginner to Practice (2): Flink DataStream API
Flink from Beginner to Practice (3): Real-Time Data Capture with Flink MySQL CDC

I. Overview

1. Version Compatibility

Pay attention to the MySQL version; this walkthrough uses MySQL 8.0.
Flink CDC also supports change data capture for many other databases.
The versions must also match: here we use Flink 1.18 together with Flink CDC 3.0.

2. Dependencies

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients</artifactId>
    <version>1.18.0</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java</artifactId>
    <version>1.18.0</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-base</artifactId>
    <version>1.18.0</version>
</dependency>
<dependency>
    <groupId>com.ververica</groupId>
    <artifactId>flink-connector-mysql-cdc</artifactId>
    <version>3.0.0</version>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>8.0.27</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-runtime</artifactId>
    <version>1.18.0</version>
</dependency>

II. Implementation

1. Basic Usage

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import com.ververica.cdc.debezium.JsonDebeziumDeserializationSchema;
import com.ververica.cdc.connectors.mysql.source.MySqlSource;

/**
 * Flink MySQL CDC.
 * Every time the job starts (with the default startup mode), it first captures a full snapshot of the data and then keeps reading the binlog.
 */
public class FlinkCDC01 {

    public static void main(String[] args) throws Exception {
        MySqlSource<String> mySqlSource = MySqlSource.<String>builder()
                .hostname("192.168.56.10")
                .port(3306)
                .databaseList("testdb")      // databases to monitor; multiple values and regex are supported
                .tableList("testdb.access")  // tables to monitor; multiple values allowed, format is db.table, regex supported
                .username("root")
                .password("root")
                .deserializer(new JsonDebeziumDeserializationSchema()) // converts SourceRecord to JSON String
                .build();

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // enable checkpointing
        env.enableCheckpointing(3000);

        env.fromSource(mySqlSource, WatermarkStrategy.noWatermarks(), "MySQL Source")
                // a single parallel task for the source
                .setParallelism(1)
                .print()
                .setParallelism(1); // use parallelism 1 for the sink to keep message order

        env.execute("Print MySQL Snapshot + Binlog");
    }
}

The output is JSON data:

{
  "before": null,
  "after": {
    "id": 1,
    "name": "1"
  },
  "source": {
    "version": "1.9.7.Final",
    "connector": "mysql",
    "name": "mysql_binlog_source",
    "ts_ms": 1707353812000,
    "snapshot": "false",
    "db": "testdb",        // database name
    "sequence": null,
    "table": "access",     // table name
    "server_id": 1,
    "gtid": null,
    "file": "binlog.000005",
    "pos": 374,
    "row": 0,
    "thread": 9,
    "query": null
  },
  "op": "c",               // operation: c = create, u = update, d = delete, r = read (snapshot)
  "ts_ms": 1707353812450,
  "transaction": null
}

2. More Configuration Options

https://ververica.github.io/flink-cdc-connectors/master/content/connectors/mysql-cdc%28ZH%29.html

The configuration option scan.startup.mode specifies the startup mode of the MySQL CDC consumer. Valid values are listed below; a DataStream-API sketch follows the list.
initial (default): performs an initial snapshot of the monitored tables on the first startup and then continues reading the latest binlog.
earliest-offset: skips the snapshot phase and starts reading from the earliest available binlog offset.
latest-offset: never performs a snapshot of the monitored tables on the first startup; the connector reads only from the end of the binlog, i.e. only changes made after the connector starts.
specific-offset: skips the snapshot phase and starts reading from a specified binlog offset, given either as a binlog file name plus position, or as a GTID set if GTIDs are enabled on the cluster.
timestamp: skips the snapshot phase and starts reading binlog events from the specified timestamp.
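As a minimal sketch (not part of the original post), the same startup modes can be selected in the DataStream API through StartupOptions on the source builder. The class name FlinkCDCStartupModes is made up for illustration; the host, credentials and table are the example values used earlier:

import com.ververica.cdc.connectors.mysql.source.MySqlSource;
import com.ververica.cdc.connectors.mysql.table.StartupOptions;
import com.ververica.cdc.debezium.JsonDebeziumDeserializationSchema;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FlinkCDCStartupModes {

    public static void main(String[] args) throws Exception {
        MySqlSource<String> mySqlSource = MySqlSource.<String>builder()
                .hostname("192.168.56.10")
                .port(3306)
                .databaseList("testdb")
                .tableList("testdb.access")
                .username("root")
                .password("root")
                .deserializer(new JsonDebeziumDeserializationSchema())
                // Equivalent of scan.startup.mode -- pick one:
                //   StartupOptions.initial()         -> snapshot first, then binlog (default)
                //   StartupOptions.earliest()        -> earliest-offset
                //   StartupOptions.latest()          -> latest-offset
                //   StartupOptions.timestamp(millis) -> binlog events from the given timestamp
                .startupOptions(StartupOptions.latest())
                .build();

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(3000);
        env.fromSource(mySqlSource, WatermarkStrategy.noWatermarks(), "MySQL Source").print();
        env.execute("MySQL CDC with an explicit startup mode");
    }
}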

3. Custom Deserializer

import com.ververica.cdc.debezium.DebeziumDeserializationSchema;
import io.debezium.data.Envelope;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.util.Collector;
import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.source.SourceRecord;

import java.util.List;

public class DomainDeserializationSchema implements DebeziumDeserializationSchema<String> {

    @Override
    public void deserialize(SourceRecord sourceRecord, Collector<String> collector) throws Exception {
        String topic = sourceRecord.topic();
        String[] split = topic.split("\\.");
        System.out.println("database: " + split[1]);
        System.out.println("table: " + split[2]);

        Struct value = (Struct) sourceRecord.value();

        // "before" image of the row
        Struct before = value.getStruct("before");
        System.out.println("before: " + before);
        if (before != null) {
            // all fields
            List<Field> fields = before.schema().fields();
            for (Field field : fields) {
                System.out.println("before field: " + field.name() + " value: " + before.get(field));
            }
        }

        // "after" image of the row
        Struct after = value.getStruct("after");
        System.out.println("after: " + after);
        if (after != null) {
            // all fields
            List<Field> fields = after.schema().fields();
            for (Field field : fields) {
                System.out.println("after field: " + field.name() + " value: " + after.get(field));
            }
        }

        // operation type
        Envelope.Operation operation = Envelope.operationFor(sourceRecord);
        System.out.println("operation: " + operation);

        // emit the deserialized result
        collector.collect("aaaaaaaaaaaaa");
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return BasicTypeInfo.STRING_TYPE_INFO; // produced type
    }
}

MySqlSource<String> mySqlSource = MySqlSource.<String>builder()
        .hostname("192.168.56.10")
        .port(3306)
        .databaseList("testdb")      // databases to monitor; multiple values allowed
        .tableList("testdb.access")  // tables to monitor; multiple values allowed
        .username("root")
        .password("root")
        .deserializer(new DomainDeserializationSchema()) // custom deserializer
        .build();

4. Flink SQL Approach

In practice the SQL way of doing CDC is used less often; the DataStream API is more common. A minimal sketch is shown below anyway.
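For completeness, here is a rough sketch (not from the original post) of consuming the same testdb.access table with the mysql-cdc SQL connector via the Table API. The class name FlinkCDCSql and the id/name schema are assumptions based on the JSON sample above, and running it will likely also require the flink-table-api-java-bridge and flink-table-planner dependencies:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class FlinkCDCSql {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(3000);
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Register testdb.access as a CDC source table (schema assumed from the JSON sample: id, name).
        tEnv.executeSql(
                "CREATE TABLE access_cdc (" +
                "  id INT," +
                "  name STRING," +
                "  PRIMARY KEY (id) NOT ENFORCED" +
                ") WITH (" +
                "  'connector' = 'mysql-cdc'," +
                "  'hostname' = '192.168.56.10'," +
                "  'port' = '3306'," +
                "  'username' = 'root'," +
                "  'password' = 'root'," +
                "  'database-name' = 'testdb'," +
                "  'table-name' = 'access'" +
                ")");

        // Print the changelog (inserts, updates, deletes) to stdout.
        tEnv.executeSql("SELECT * FROM access_cdc").print();
    }
}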

III. Pitfalls

1. The MySQL server has a timezone offset (0 seconds ahead of UTC) which does not match the configured timezone Asia/Shanghai.

2024-02-08 08:36:33 INFO 5217 --- [lt-dispatcher-6] o.a.f.r.executiongraph.ExecutionGraph : Source: MySQL Source -> Sink: Print to Std. Out (1/1) (e2371dabd0c952a5dfa7c053cbde80c3_cbc357ccb763df2852fee8c4fc7d55f2_0_2) switched from CREATED to SCHEDULED.
2024-02-08 08:36:33 INFO 5217 --- [lt-dispatcher-8] o.a.f.r.r.s.FineGrainedSlotManager : Received resource requirements from job 369b1c979674a0444f679dd13264ea88: [ResourceRequirement{resourceProfile=ResourceProfile{UNKNOWN}, numberOfRequiredSlots=1}]
2024-02-08 08:36:33 INFO 5218 --- [lt-dispatcher-6] o.a.flink.runtime.jobmaster.JobMaster : Trying to recover from a global failure.
org.apache.flink.util.FlinkException: Global failure triggered by OperatorCoordinator for 'Source: MySQL Source -> Sink: Print to Std. Out' (operator cbc357ccb763df2852fee8c4fc7d55f2).
at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder$LazyInitializedCoordinatorContext.failJob(OperatorCoordinatorHolder.java:624)
at org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator$QuiesceableContext.failJob(RecreateOnResetOperatorCoordinator.java:248)
at org.apache.flink.runtime.source.coordinator.SourceCoordinatorContext.failJob(SourceCoordinatorContext.java:395)
at org.apache.flink.runtime.source.coordinator.SourceCoordinator.start(SourceCoordinator.java:225)
at org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator$DeferrableCoordinator.resetAndStart(RecreateOnResetOperatorCoordinator.java:416)
at org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator.lambda$resetToCheckpoint$7(RecreateOnResetOperatorCoordinator.java:156)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:792)
at java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2153)
at org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator.resetToCheckpoint(RecreateOnResetOperatorCoordinator.java:143)
at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.resetToCheckpoint(OperatorCoordinatorHolder.java:284)
at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreStateToCoordinators(CheckpointCoordinator.java:2044)
at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreLatestCheckpointedStateInternal(CheckpointCoordinator.java:1719)
at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreLatestCheckpointedStateToAll(CheckpointCoordinator.java:1647)
at org.apache.flink.runtime.scheduler.SchedulerBase.restoreState(SchedulerBase.java:434)
at org.apache.flink.runtime.scheduler.DefaultScheduler.restartTasks(DefaultScheduler.java:419)
at org.apache.flink.runtime.scheduler.DefaultScheduler.lambda$null$2(DefaultScheduler.java:379)
at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:719)
at java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:701)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.lambda$handleRunAsync$4(PekkoRpcActor.java:451)
at org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRunAsync(PekkoRpcActor.java:451)
at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcMessage(PekkoRpcActor.java:218)
at org.apache.flink.runtime.rpc.pekko.FencedPekkoRpcActor.handleRpcMessage(FencedPekkoRpcActor.java:85)
at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleMessage(PekkoRpcActor.java:168)
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33)
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29)
at scala.PartialFunction.applyOrElse(PartialFunction.scala:127)
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:126)
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547)
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545)
at org.apache.pekko.actor.AbstractActor.aroundReceive(AbstractActor.scala:229)
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590)
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557)
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280)
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241)
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1067)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1703)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:172)
Caused by: org.apache.flink.table.api.ValidationException: The MySQL server has a timezone offset (0 seconds ahead of UTC) which does not match the configured timezone Asia/Shanghai. Specify the right server-time-zone to avoid inconsistencies for time-related fields.
at com.ververica.cdc.connectors.mysql.MySqlValidator.checkTimeZone(MySqlValidator.java:184)
at com.ververica.cdc.connectors.mysql.MySqlValidator.validate(MySqlValidator.java:72)
at com.ververica.cdc.connectors.mysql.source.MySqlSource.createEnumerator(MySqlSource.java:197)
at org.apache.flink.runtime.source.coordinator.SourceCoordinator.start(SourceCoordinator.java:221)
… 42 common frames omitted

Check the MySQL setting:
show variables like '%time_zone%';
Solution:

SET time_zone = 'Asia/Shanghai';
SET @@global.time_zone = 'Asia/Shanghai';
# check again
SELECT @@global.time_zone;
show variables like '%time_zone%';

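Alternatively, the time zone can be declared on the connector side rather than changed on the server. The snippet below is a sketch based on the connector's serverTimeZone option and is not a step from the original post; also note that if MySQL rejects the named zone 'Asia/Shanghai' in the SET statements above, the time zone tables are probably not loaded, and a numeric offset such as '+08:00' can be used instead.

// Sketch: tell the connector which time zone the MySQL server uses,
// so validation no longer fails on the mismatch.
MySqlSource<String> mySqlSource = MySqlSource.<String>builder()
        .hostname("192.168.56.10")
        .port(3306)
        .databaseList("testdb")
        .tableList("testdb.access")
        .username("root")
        .password("root")
        .serverTimeZone("Asia/Shanghai") // must match the MySQL server's effective time zone
        .deserializer(new JsonDebeziumDeserializationSchema())
        .build();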

References

Source code: https://github.com/ververica/flink-cdc-connectors
Documentation: https://ververica.github.io/flink-cdc-connectors/master/content/overview/cdc-connectors.html
Website: https://ververica.github.io/flink-cdc-connectors/
