Debugging and Learning the Hudi Flink SQL Source Code (1)

Preface

With the goal of learning the hudi-flink source code, this post debugs the code from my earlier article "Hudi Flink SQL代码示例及本地调试", recording the main steps of the debugging process and the corresponding source snippets.

Versions

  • Flink 1.15.4
  • Hudi 0.13.0

Goal

In the article "Hudi Flink SQL代码示例及本地调试" I mentioned that the Table API entry point looks much like the DataStream API one. The DataStream API entry points are the sink and source methods of HoodiePipeline, and those two methods call HoodieTableFactory's createDynamicTableSink and createDynamicTableSource respectively. So how does the Table API code get, step by step, to createDynamicTableSink and createDynamicTableSource? And once a HoodieTableSink is returned, how does the data actually get written? The main entry point of Hudi's write logic appears to be the method body of HoodieTableSink.getSinkRuntimeProvider, but I had never worked these questions out before. So the goal this time is to figure out:

1. The main code steps from the Table API entry point to createDynamicTableSink returning a HoodieTableSink.
2. Where the method body of HoodieTableSink.getSinkRuntimeProvider is invoked to carry out the subsequent Hudi write logic.

Related classes:

  • HoodiePipeline (DataStream API)
  • HoodieTableFactory
  • HoodieTableSink
  • DataStreamSinkProviderAdapter (functional interface)
  • TableEnvironmentImpl
  • BatchPlanner
  • PlannerBase
  • FactoryUtil
  • BatchExecSink
  • CommonExecSink

DataStream API

The questions above are actually easy to answer from the DataStream API code, so let's first look at the DataStream API code for writing to Hudi. The full code is in the article "Flink Hudi DataStream API代码示例":

DataStream<RowData> dataStream = env.fromElements(
    GenericRowData.of(1, StringData.fromString("hudi1"), 1.1, 1000L, StringData.fromString("2023-04-07")),
    GenericRowData.of(2, StringData.fromString("hudi2"), 2.2, 2000L, StringData.fromString("2023-04-08"))
);

HoodiePipeline.Builder builder = HoodiePipeline.builder(targetTable)
    .column("id int")
    .column("name string")
    .column("price double")
    .column("ts bigint")
    .column("dt string")
    .pk("id")
    .partition("dt")
    .options(options);

builder.sink(dataStream, false);
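The snippet above assumes that env, targetTable, and options have already been defined. As a reference, here is a minimal sketch of what that setup might look like — the table name, path, and option values are hypothetical, not taken from the original article:

import java.util.HashMap;
import java.util.Map;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.hudi.common.model.HoodieTableType;
import org.apache.hudi.configuration.FlinkOptions;

public class HudiSinkSetup {
  public static void main(String[] args) {
    // Hypothetical setup for the snippet above; all values are illustrative.
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.enableCheckpointing(10_000); // Hudi streaming writes commit on Flink checkpoints

    String targetTable = "t1";
    Map<String, String> options = new HashMap<>();
    options.put(FlinkOptions.PATH.key(), "file:///tmp/hudi/t1");
    options.put(FlinkOptions.TABLE_TYPE.key(), HoodieTableType.MERGE_ON_READ.name());
  }
}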

HoodiePipeline.Builder.sink

public DataStreamSink<?> sink(DataStream<RowData> input, boolean bounded) {
  TableDescriptor tableDescriptor = getTableDescriptor();
  return HoodiePipeline.sink(input, tableDescriptor.getTableId(), tableDescriptor.getResolvedCatalogTable(), bounded);
}

HoodiePipeline.sink

private static DataStreamSink<?> sink(DataStream<RowData> input, ObjectIdentifier tablePath, ResolvedCatalogTable catalogTable, boolean isBounded) {
  FactoryUtil.DefaultDynamicTableContext context = Utils.getTableContext(tablePath, catalogTable, Configuration.fromMap(catalogTable.getOptions()));
  HoodieTableFactory hoodieTableFactory = new HoodieTableFactory();
  return ((DataStreamSinkProvider) hoodieTableFactory
      .createDynamicTableSink(context)
      .getSinkRuntimeProvider(new SinkRuntimeProviderContext(isBounded)))
      .consumeDataStream(input);
}

HoodiePipeline.sink already contains the answers:
1. HoodieTableFactory.createDynamicTableSink returns a HoodieTableSink.
2. HoodieTableSink.getSinkRuntimeProvider returns a DataStreamSinkProviderAdapter.
3. DataStreamSinkProviderAdapter.consumeDataStream invokes the method body of HoodieTableSink.getSinkRuntimeProvider, which runs the subsequent Hudi write logic. The dataStream here is the DataStream<RowData> we created at the start of the program.

HoodieTableSink.getSinkRuntimeProvider

getSinkRuntimeProvider returns a DataStreamSinkProviderAdapter, where the lambda expression dataStream -> {...} is the concrete implementation of DataStreamSinkProviderAdapter.consumeDataStream(dataStream):

@Override
public SinkRuntimeProvider getSinkRuntimeProvider(Context context) {
  return (DataStreamSinkProviderAdapter) dataStream -> {
    // setup configuration
    long ckpTimeout = dataStream.getExecutionEnvironment().getCheckpointConfig().getCheckpointTimeout();
    conf.setLong(FlinkOptions.WRITE_COMMIT_ACK_TIMEOUT, ckpTimeout);
    // set up default parallelism
    OptionsInference.setupSinkTasks(conf, dataStream.getExecutionConfig().getParallelism());
    RowType rowType = (RowType) schema.toSinkRowDataType().notNull().getLogicalType();

    // bulk_insert mode
    final String writeOperation = this.conf.get(FlinkOptions.OPERATION);
    if (WriteOperationType.fromValue(writeOperation) == WriteOperationType.BULK_INSERT) {
      return Pipelines.bulkInsert(conf, rowType, dataStream);
    }

    // Append mode
    if (OptionsResolver.isAppendMode(conf)) {
      DataStream<Object> pipeline = Pipelines.append(conf, rowType, dataStream, context.isBounded());
      if (OptionsResolver.needsAsyncClustering(conf)) {
        return Pipelines.cluster(conf, rowType, pipeline);
      } else {
        return Pipelines.dummySink(pipeline);
      }
    }

    DataStream<Object> pipeline;
    // bootstrap
    final DataStream<HoodieRecord> hoodieRecordDataStream =
        Pipelines.bootstrap(conf, rowType, dataStream, context.isBounded(), overwrite);
    // write pipeline
    pipeline = Pipelines.hoodieStreamWrite(conf, hoodieRecordDataStream);
    // compaction
    if (OptionsResolver.needsAsyncCompaction(conf)) {
      // use synchronous compaction for bounded source.
      if (context.isBounded()) {
        conf.setBoolean(FlinkOptions.COMPACTION_ASYNC_ENABLED, false);
      }
      return Pipelines.compact(conf, pipeline);
    } else {
      return Pipelines.clean(conf, pipeline);
    }
  };
}

DataStreamSinkProviderAdapter is actually a functional interface, i.e. an interface that contains exactly one abstract method. A lambda expression can be assigned to a functional interface, thereby instantiating it:

public interface DataStreamSinkProviderAdapter extends DataStreamSinkProvider {
  DataStreamSink<?> consumeDataStream(DataStream<RowData> dataStream);

  @Override
  default DataStreamSink<?> consumeDataStream(ProviderContext providerContext, DataStream<RowData> dataStream) {
    return consumeDataStream(dataStream);
  }
}
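To make the mechanism concrete, here is a minimal, self-contained illustration of a functional interface instantiated by a lambda (my own example, unrelated to the Hudi/Flink sources) — the same pattern as DataStreamSinkProviderAdapter: one abstract method plus default methods:

// A minimal illustration of a functional interface; not from the Hudi/Flink sources.
@FunctionalInterface
interface Greeter {
  // the single abstract method: a lambda supplies its implementation
  String greet(String name);

  // default methods do not count against the single-abstract-method rule
  default String greetLoudly(String name) {
    return greet(name).toUpperCase();
  }
}

public class GreeterDemo {
  public static void main(String[] args) {
    // the lambda body becomes the implementation of greet(...)
    Greeter greeter = name -> "hello, " + name;
    System.out.println(greeter.greet("hudi"));       // hello, hudi
    System.out.println(greeter.greetLoudly("hudi")); // HELLO, HUDI
  }
}

This is exactly how the lambda in getSinkRuntimeProvider becomes the implementation of consumeDataStream(dataStream): the cast to (DataStreamSinkProviderAdapter) tells the compiler which functional interface the lambda instantiates.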

For more on functional interfaces and lambda expressions, see the following two articles:
https://it.sohu.com/a/682888110_100123073
https://blog.csdn.net/Speechless_/article/details/123746047

Table API

Now that we know the call steps of the DataStream API, let's look at the rough call steps of the Table API for comparison. The entry point of the debugged code:

tableEnv.executeSql(String.format("insert into %s values (1,'hudi',10,100,'2023-05-28')", tableName));
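For context, here is a minimal sketch of the setup around this statement — the table name and DDL options are hypothetical, but the batch mode matters: it is why the planner we see later is BatchPlanner:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class HudiTableApiDemo {
  public static void main(String[] args) {
    // batch mode: this is why this.planner below is a BatchPlanner
    TableEnvironment tableEnv = TableEnvironment.create(EnvironmentSettings.inBatchMode());
    String tableName = "t1";
    // hypothetical DDL; the table path and options are illustrative
    tableEnv.executeSql(String.format(
        "create table %s (id int primary key not enforced, name string, price double, ts bigint, dt string) "
            + "partitioned by (dt) with ('connector' = 'hudi', 'path' = 'file:///tmp/hudi/t1')",
        tableName));
    tableEnv.executeSql(String.format("insert into %s values (1,'hudi',10,100,'2023-05-28')", tableName));
  }
}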

Overall call flow

1. tableEnv.executeSql -> TableEnvironmentImpl.executeSql -> executeInternal(Operation operation) -> executeInternal(List<ModifyOperation> operations) -> this.translate -> (PlannerBase) this.planner.translate

2.1. PlannerBase.translate -> PlannerBase.translateToRel -> getTableSink(catalogSink.getContextResolvedTable, dynamicOptions) -> FactoryUtil.createDynamicTableSink -> HoodieTableFactory.createDynamicTableSink

2.2. PlannerBase.translate -> (BatchPlanner) translateToPlan(execGraph) -> (ExecNodeBase) node.translateToPlan -> (BatchExecSink) translateToPlanInternal -> (CommonExecSink) createSinkTransformation -> (HoodieTableSink) getSinkRuntimeProvider -> (CommonExecSink) applySinkProvider -> provider.consumeDataStream

Detailed code

TableEnvironmentImpl

TableEnvironmentImpl.executeSql

public TableResult executeSql(String statement) {
  List<Operation> operations = this.getParser().parse(statement);
  if (operations.size() != 1) {
    throw new TableException(
        "Unsupported SQL query! executeSql() only accepts a single SQL statement of type "
            + "CREATE TABLE, DROP TABLE, ALTER TABLE, CREATE DATABASE, DROP DATABASE, ALTER DATABASE, "
            + "CREATE FUNCTION, DROP FUNCTION, ALTER FUNCTION, CREATE CATALOG, DROP CATALOG, "
            + "USE CATALOG, USE [CATALOG.]DATABASE, SHOW CATALOGS, SHOW DATABASES, SHOW TABLES, "
            + "SHOW [USER] FUNCTIONS, SHOW PARTITIONS, CREATE VIEW, DROP VIEW, SHOW VIEWS, INSERT, "
            + "DESCRIBE, LOAD MODULE, UNLOAD MODULE, USE MODULES, SHOW [FULL] MODULES.");
  } else {
    // Key step: executeInternal
    return this.executeInternal((Operation) operations.get(0));
  }
}

executeInternal(Operation operation)

public TableResultInternal executeInternal(Operation operation) {
  if (operation instanceof ModifyOperation) {
    // Key step: executeInternal
    return this.executeInternal(Collections.singletonList((ModifyOperation) operation));
  } else if (operation instanceof StatementSetOperation) {
    return this.executeInternal(((StatementSetOperation) operation).getOperations());
  ...

executeInternal(List<ModifyOperation> operations)

public TableResultInternal executeInternal(List<ModifyOperation> operations) {
  // Key step: translate
  List<Transformation<?>> transformations = this.translate(operations);
  List<String> sinkIdentifierNames = this.extractSinkIdentifierNames(operations);
  TableResultInternal result = this.executeInternal(transformations, sinkIdentifierNames);
  if ((Boolean) this.tableConfig.get(TableConfigOptions.TABLE_DML_SYNC)) {
    try {
      result.await();
    } catch (ExecutionException | InterruptedException var6) {
      result.getJobClient().ifPresent(JobClient::cancel);
      throw new TableException("Fail to wait execution finish.", var6);
    }
  }
  return result;
}

translate
The planner here is BatchPlanner, because we configured batch mode with EnvironmentSettings.inBatchMode().

protected List<Transformation<?>> translate(List<ModifyOperation> modifyOperations) {
  // the planner here is BatchPlanner, because we configured batch mode via EnvironmentSettings.inBatchMode()
  // Key step: PlannerBase.translate
  return this.planner.translate(modifyOperations);
}

BatchPlanner

PlannerBase.translate (PlannerBase is the parent class of BatchPlanner)

override def translate(
    modifyOperations: util.List[ModifyOperation]): util.List[Transformation[_]] = {
  beforeTranslation()
  if (modifyOperations.isEmpty) {
    return List.empty[Transformation[_]]
  }
  // Key step: translateToRel
  val relNodes = modifyOperations.map(translateToRel)
  val optimizedRelNodes = optimize(relNodes)
  val execGraph = translateToExecNodeGraph(optimizedRelNodes, isCompiled = false)
  // Key step: translateToPlan
  val transformations = translateToPlan(execGraph)
  afterTranslation()
  transformations
}

PlannerBase.translateToRel

private[flink] def translateToRel(modifyOperation: ModifyOperation): RelNode = {
  val dataTypeFactory = catalogManager.getDataTypeFactory
  modifyOperation match {
    case s: UnregisteredSinkModifyOperation[_] =>
      val input = getRelBuilder.queryOperation(s.getChild).build()
      val sinkSchema = s.getSink.getTableSchema
      // validate query schema and sink schema, and apply cast if possible
      val query = validateSchemaAndApplyImplicitCast(
        input,
        catalogManager.getSchemaResolver.resolve(sinkSchema.toSchema),
        null,
        dataTypeFactory,
        getTypeFactory)
      LogicalLegacySink.create(
        query,
        s.getSink,
        "UnregisteredSink",
        ConnectorCatalogTable.sink(s.getSink, !isStreamingMode))

    case collectModifyOperation: CollectModifyOperation =>
      val input = getRelBuilder.queryOperation(modifyOperation.getChild).build()
      DynamicSinkUtils.convertCollectToRel(
        getRelBuilder,
        input,
        collectModifyOperation,
        getTableConfig,
        getFlinkContext.getClassLoader)

    case catalogSink: SinkModifyOperation =>
      val input = getRelBuilder.queryOperation(modifyOperation.getChild).build()
      val dynamicOptions = catalogSink.getDynamicOptions
      // Key step: getTableSink
      getTableSink(catalogSink.getContextResolvedTable, dynamicOptions).map {
        case (table, sink: TableSink[_]) =>
          // Legacy tables can't be anonymous
          val identifier = catalogSink.getContextResolvedTable.getIdentifier
          // check the logical field type and physical field type are compatible
          val queryLogicalType = FlinkTypeFactory.toLogicalRowType(input.getRowType)
          // validate logical schema and physical schema are compatible
          validateLogicalPhysicalTypesCompatible(table, sink, queryLogicalType)
          // validate TableSink
          validateTableSink(catalogSink, identifier, sink, table.getPartitionKeys)
          // validate query schema and sink schema, and apply cast if possible
          val query = validateSchemaAndApplyImplicitCast(
            input,
            table.getResolvedSchema,
            identifier.asSummaryString,
            dataTypeFactory,
            getTypeFactory)
          val hints = new util.ArrayList[RelHint]
          if (!dynamicOptions.isEmpty) {
            hints.add(RelHint.builder("OPTIONS").hintOptions(dynamicOptions).build)
          }
          LogicalLegacySink.create(
            query,
            hints,
            sink,
            identifier.toString,
            table,
            catalogSink.getStaticPartitions.toMap)

        case (table, sink: DynamicTableSink) =>
          DynamicSinkUtils.convertSinkToRel(getRelBuilder, input, catalogSink, sink)
      } match {
        case Some(sinkRel) => sinkRel
        case None =>
          throw new TableException(s"Sink '${catalogSink.getContextResolvedTable}' does not exists")
      }

PlannerBase.getTableSink

private def getTableSink(
    contextResolvedTable: ContextResolvedTable,
    dynamicOptions: JMap[String, String]): Option[(ResolvedCatalogTable, Any)] = {
  contextResolvedTable.getTable[CatalogBaseTable] match {
    case connectorTable: ConnectorCatalogTable[_, _] =>
      val resolvedTable = contextResolvedTable.getResolvedTable[ResolvedCatalogTable]
      toScala(connectorTable.getTableSink) match {
        case Some(sink) => Some(resolvedTable, sink)
        case None => None
      }

    case regularTable: CatalogTable =>
      val resolvedTable = contextResolvedTable.getResolvedTable[ResolvedCatalogTable]
      ...
      if (
        !contextResolvedTable.isAnonymous &&
        TableFactoryUtil.isLegacyConnectorOptions(
          catalogManager.getCatalog(objectIdentifier.getCatalogName).orElse(null),
          tableConfig,
          isStreamingMode,
          objectIdentifier,
          resolvedTable.getOrigin,
          isTemporary)
      ) {
        ...
      } else {
        ...
        // Key step: FactoryUtil.createDynamicTableSink
        val tableSink = FactoryUtil.createDynamicTableSink(
          factory,
          objectIdentifier,
          tableToFind,
          Collections.emptyMap(),
          getTableConfig,
          getFlinkContext.getClassLoader,
          isTemporary)
        Option(resolvedTable, tableSink)
      }

    case _ => None
  }

FactoryUtil.createDynamicTableSink

Based on 'connector'='hudi', the factory is resolved to org.apache.hudi.table.HoodieTableFactory, and then HoodieTableFactory.createDynamicTableSink is called:

public static DynamicTableSink createDynamicTableSink(
    @Nullable DynamicTableSinkFactory preferredFactory,
    ObjectIdentifier objectIdentifier,
    ResolvedCatalogTable catalogTable,
    Map<String, String> enrichmentOptions,
    ReadableConfig configuration,
    ClassLoader classLoader,
    boolean isTemporary) {
  final DefaultDynamicTableContext context =
      new DefaultDynamicTableContext(
          objectIdentifier,
          catalogTable,
          enrichmentOptions,
          configuration,
          classLoader,
          isTemporary);
  try {
    // 'connector'='hudi' -> org.apache.hudi.table.HoodieTableFactory
    final DynamicTableSinkFactory factory =
        preferredFactory != null
            ? preferredFactory
            : discoverTableFactory(DynamicTableSinkFactory.class, context);
    // Key step: HoodieTableFactory.createDynamicTableSink
    return factory.createDynamicTableSink(context);
  } catch (Throwable t) {
    throw new ValidationException(
        String.format(
            "Unable to create a sink for writing table '%s'.\n\n"
                + "Table options are:\n\n"
                + "%s",
            objectIdentifier.asSummaryString(),
            catalogTable.getOptions().entrySet().stream()
                .map(e -> stringifyOption(e.getKey(), e.getValue()))
                .sorted()
                .collect(Collectors.joining("\n"))),
        t);
  }
}
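A side note on how discoverTableFactory gets from "hudi" to HoodieTableFactory: Flink loads Factory implementations via Java SPI (each connector jar registers itself under META-INF/services/org.apache.flink.table.factories.Factory) and matches their factoryIdentifier() against the 'connector' option. The following is a simplified sketch of that idea, not the actual FactoryUtil code:

import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

import org.apache.flink.table.factories.DynamicTableSinkFactory;
import org.apache.flink.table.factories.Factory;

public class FactoryDiscoverySketch {
  // Simplified sketch of SPI-based discovery; the real logic lives in FactoryUtil.
  static DynamicTableSinkFactory discoverSinkFactory(ClassLoader classLoader, String connector) {
    List<Factory> candidates = new ArrayList<>();
    // hudi-flink registers org.apache.hudi.table.HoodieTableFactory in this service file,
    // and its factoryIdentifier() returns "hudi"
    ServiceLoader.load(Factory.class, classLoader).forEach(candidates::add);
    return candidates.stream()
        .filter(f -> f instanceof DynamicTableSinkFactory)
        .filter(f -> connector.equals(f.factoryIdentifier()))
        .map(f -> (DynamicTableSinkFactory) f)
        .findFirst()
        .orElseThrow(() -> new IllegalStateException("No sink factory for connector: " + connector));
  }
}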

HoodieTableFactory.createDynamicTableSink

This answers the first question.

public DynamicTableSink createDynamicTableSink(Context context) {
  Configuration conf = FlinkOptions.fromMap(context.getCatalogTable().getOptions());
  checkArgument(
      !StringUtils.isNullOrEmpty(conf.getString(FlinkOptions.PATH)),
      "Option [path] should not be empty.");
  setupTableOptions(conf.getString(FlinkOptions.PATH), conf);
  ResolvedSchema schema = context.getCatalogTable().getResolvedSchema();
  sanityCheck(conf, schema);
  setupConfOptions(conf, context.getObjectIdentifier(), context.getCatalogTable(), schema);
  // Key step: HoodieTableSink
  return new HoodieTableSink(conf, schema);
}

BatchExecSink

Back in PlannerBase.translate: later it calls translateToPlan. execGraph.getRootNodes returns a BatchExecSink (to see why it is a BatchExecSink, look at the translateToExecNodeGraph method called in PlannerBase.translate). BatchExecSink is a subtype of BatchExecNode, so node.translateToPlan is executed.

BatchPlanner.translateToPlan

override protected def translateToPlan(execGraph: ExecNodeGraph): util.List[Transformation[_]] = {
  beforeTranslation()
  val planner = createDummyPlanner()
  val transformations = execGraph.getRootNodes.map {
    // BatchExecSink
    // Key step: ExecNodeBase.translateToPlan
    case node: BatchExecNode[_] => node.translateToPlan(planner)
    case _ =>
      throw new TableException(
        "Cannot generate BoundedStream due to an invalid logical plan. " +
          "This is a bug and should not happen. Please file an issue.")
  }
  afterTranslation()
  transformations
}

BatchExecSink

public class BatchExecSink extends CommonExecSink implements BatchExecNode<Object> {
  ...
}

public abstract class CommonExecSink extends ExecNodeBase<Object>
    implements MultipleTransformationTranslator<Object> {
  ...
}

ExecNodeBase.translateToPlan

public final Transformation<T> translateToPlan(Planner planner) {
  if (transformation == null) {
    transformation =
        // Key step: BatchExecSink.translateToPlanInternal
        translateToPlanInternal(
            (PlannerBase) planner,
            ExecNodeConfig.of(
                ((PlannerBase) planner).getTableConfig(),
                persistedConfig,
                isCompiled));
    if (this instanceof SingleTransformationTranslator) {
      if (inputsContainSingleton()) {
        transformation.setParallelism(1);
        transformation.setMaxParallelism(1);
      }
    }
  }
  return transformation;
}

BatchExecSink.translateToPlanInternal

protected Transformation<Object> translateToPlanInternal(PlannerBase planner, ExecNodeConfig config) {
  final Transformation<RowData> inputTransform =
      (Transformation<RowData>) getInputEdges().get(0).translateToPlan(planner);
  // org.apache.hudi.table.HoodieTableSink
  final DynamicTableSink tableSink = tableSinkSpec.getTableSink(planner.getFlinkContext());
  // Key step: CommonExecSink.createSinkTransformation
  return createSinkTransformation(planner.getExecEnv(), config, inputTransform, tableSink, -1, false);
}

CommonExecSink.createSinkTransformation

The tableSink here is the HoodieTableSink; HoodieTableSink's getSinkRuntimeProvider method is called and returns the runtimeProvider (without executing the lambda body inside it):

protected Transformation<Object> createSinkTransformation(
    StreamExecutionEnvironment streamExecEnv,
    ExecNodeConfig config,
    Transformation<RowData> inputTransform,
    // tableSink here is the HoodieTableSink
    DynamicTableSink tableSink,
    int rowtimeFieldIndex,
    boolean upsertMaterialize) {
  final ResolvedSchema schema = tableSinkSpec.getContextResolvedTable().getResolvedSchema();
  final SinkRuntimeProvider runtimeProvider =
      // Key step: HoodieTableSink.getSinkRuntimeProvider
      tableSink.getSinkRuntimeProvider(new SinkRuntimeProviderContext(isBounded));
  final RowType physicalRowType = getPhysicalRowType(schema);
  final int[] primaryKeys = getPrimaryKeyIndices(physicalRowType, schema);
  final int sinkParallelism = deriveSinkParallelism(inputTransform, runtimeProvider);
  final int inputParallelism = inputTransform.getParallelism();
  final boolean inputInsertOnly = inputChangelogMode.containsOnly(RowKind.INSERT);
  final boolean hasPk = primaryKeys.length > 0;
  ...
  return (Transformation<Object>)
      // Key step: CommonExecSink.applySinkProvider
      applySinkProvider(
          sinkTransform,
          streamExecEnv,
          runtimeProvider,
          rowtimeFieldIndex,
          sinkParallelism,
          config);
}

CommonExecSink.applySinkProvider

It first creates the dataStream via new DataStream<>(env, sinkTransformation), then executes provider.consumeDataStream, which invokes the method body of HoodieTableSink.getSinkRuntimeProvider. The provider here is the DataStreamSinkProviderAdapter returned by HoodieTableSink.getSinkRuntimeProvider:

private Transformation<?> applySinkProvider(
    Transformation<RowData> inputTransform,
    StreamExecutionEnvironment env,
    SinkRuntimeProvider runtimeProvider,
    int rowtimeFieldIndex,
    int sinkParallelism,
    ExecNodeConfig config) {
  TransformationMetadata sinkMeta = createTransformationMeta(SINK_TRANSFORMATION, config);
  if (runtimeProvider instanceof DataStreamSinkProvider) {
    Transformation<RowData> sinkTransformation =
        applyRowtimeTransformation(inputTransform, rowtimeFieldIndex, sinkParallelism, config);
    // create the dataStream
    final DataStream<RowData> dataStream = new DataStream<>(env, sinkTransformation);
    final DataStreamSinkProvider provider = (DataStreamSinkProvider) runtimeProvider;
    // Key step: provider.consumeDataStream
    return provider.consumeDataStream(createProviderContext(config), dataStream).getTransformation();
  } else if (runtimeProvider instanceof TransformationSinkProvider) {
    ...

provider.consumeDataStream (already covered above under DataStreamSinkProviderAdapter)

It invokes the method body (the lambda expression) of HoodieTableSink.getSinkRuntimeProvider, which runs the subsequent Hudi write logic.
This answers the second question.

default DataStreamSink<?> consumeDataStream(ProviderContext providerContext, DataStream<RowData> dataStream) {
  return consumeDataStream(dataStream);
}

Summary

This post is a simple record of my own process of debugging the Hudi Flink SQL source code; it is not an in-depth analysis of the sources (my own level isn't there yet). The main purpose was to figure out the main code steps from the Table API entry point to createDynamicTableSink returning a HoodieTableSink, and where the method body of HoodieTableSink.getSinkRuntimeProvider is invoked to run the subsequent Hudi write logic. This should make later analysis and study of the Hudi source code easier.

New knowledge from this post: functional interfaces and how lambda expressions implement them.
