Hudi Programming in IDEA

Project

1. Hudi + Spark + Kafka (Scala)

Configuration: see [1. Scala Configuration] in the Configuration appendix.

Dependencies: see [1. Hudi + Spark + Kafka Dependencies] in the Dependencies appendix.

1-1 Building the SparkSession object

  def main(args: Array[String]): Unit = {
    // 1. Build the SparkSession object
    val spark: SparkSession = SparkUtils.createSparkSession(this.getClass)
    // 2. Consume data from Kafka in real time
    val kafkaStreamDF: DataFrame = readFromKafka(spark, "order-topic")
    // 3. Extract the data and convert the data types
    val streamDF: DataFrame = process(kafkaStreamDF)
    // 4. Save the data into the Hudi table: MOR (Merge on Read)
    saveToHudi(streamDF)
    // 5. After the streaming application starts, wait for termination
    spark.streams.active.foreach(query => println(s"Query: ${query.name} is Running ............."))
    spark.streams.awaitAnyTermination()
  }
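
The helper SparkUtils.createSparkSession used above is referenced but not shown in the original. The following is a minimal sketch of what such a helper might look like; the local master and the Kryo serializer setting are assumptions for local testing, not configuration confirmed by the source.

  import org.apache.spark.sql.SparkSession

  object SparkUtils {
    // Build a SparkSession named after the calling class.
    // master("local[2]") and the Kryo serializer are assumed defaults for local runs.
    def createSparkSession(clazz: Class[_]): SparkSession = {
      SparkSession.builder()
        .appName(clazz.getSimpleName.stripSuffix("$"))
        .master("local[2]")
        .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        .getOrCreate()
    }
  }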

1-2 Reading data from Kafka / CSV files

  /**
   * Consume data in real time from the specified Kafka topic.
   *
   * @param spark
   * @param topicName
   * @return
   */
  def readFromKafka(spark: SparkSession, topicName: String): DataFrame = {
    spark.readStream
      .format("kafka") // use the Kafka source
      .option("kafka.bootstrap.servers", "node1.itcast.cn:9099") // Kafka broker address and port
      .option("subscribe", topicName) // Kafka topic to subscribe to
      .option("startingOffsets", "latest") // start consuming from the latest offsets
      .option("maxOffsetsPerTrigger", 100000) // process at most 100,000 records per trigger
      .option("failOnDataLoss", value = false) // whether to fail the query on data loss
      .load()
  }

  /**
   * Read CSV-format text files and wrap the data in a DataFrame.
   */
  def readCsvFile(spark: SparkSession, path: String): DataFrame = {
    spark.read
      // set the separator to \t
      .option("sep", "\\t")
      // the first line of the file contains the column names
      .option("header", "true")
      // infer column data types from the values
      .option("inferSchema", "true")
      // the file path
      .csv(path)
  }
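
Before wiring the stream into Hudi, it can help to check readFromKafka on its own. The sketch below is a hypothetical debugging helper (not part of the original program) that prints the raw Kafka key/value pairs from order-topic to the console instead of writing them to Hudi.

  // Hypothetical smoke test: confirm that messages are arriving from order-topic
  // by printing the raw key/value pairs to the console.
  def debugKafkaToConsole(spark: SparkSession): Unit = {
    val kafkaStreamDF: DataFrame = readFromKafka(spark, "order-topic")
    kafkaStreamDF
      .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")
      .writeStream
      .outputMode(OutputMode.Append())
      .format("console")
      .option("truncate", value = false)
      .start()
      .awaitTermination()
  }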

1-3 ETL transformation and writing to the Hudi table

  /**
   * Transform the data fetched from Kafka: extract all field values and cast them
   * to String so that they can be saved into the Hudi table.
   * @param streamDF
   * @return
   */
  def process(streamDF: DataFrame): DataFrame = {
    streamDF
      // select fields
      .selectExpr(
        "CAST(key AS STRING) order_id",
        "CAST(value AS STRING) AS message",
        "topic", "partition", "offset", "timestamp"
      )
      // parse the message and extract field values
      .withColumn("user_id", get_json_object(col("message"), "$.userId"))
      .withColumn("order_time", get_json_object(col("message"), "$.orderTime"))
      .withColumn("ip", get_json_object(col("message"), "$.ip"))
      .withColumn("order_money", get_json_object(col("message"), "$.orderMoney"))
      .withColumn("order_status", get_json_object(col("message"), "$.orderStatus"))
      // drop the message field
      .drop(col("message"))
      // convert the order time to a timestamp, used as the merge (pre-combine) field of the Hudi table
      .withColumn("ts", to_timestamp(col("order_time"), "yyyy-MM-dd HH:mm:ss.SSS"))
      // extract the partition date from the order time: yyyy-MM-dd
      .withColumn("day", substring(col("order_time"), 0, 10))
  }

  /**
   * Save the streaming DataFrame into a Hudi table; the table type can be COW or MOR.
   */
  def saveToHudi(streamDF: DataFrame): Unit = {
    streamDF.writeStream
      .outputMode(OutputMode.Append())
      .queryName("query-hudi-streaming")
      // save each micro-batch of data
      .foreachBatch((batchDF: Dataset[Row], batchId: Long) => {
        println(s"============== BatchId: ${batchId} start ==============")
        writeHudiMor(batchDF) // TODO: table type MOR
      })
      .option("checkpointLocation", "/datas/hudi-spark/struct-ckpt-1001")
      .start()
  }

  /**
   * Save the DataFrame into a Hudi table of type MOR (Merge on Read).
   */
  def writeHudiMor(dataframe: DataFrame): Unit = {
    import org.apache.hudi.DataSourceWriteOptions._
    import org.apache.hudi.config.HoodieWriteConfig._
    import org.apache.hudi.keygen.constant.KeyGeneratorOptions._

    dataframe.write
      .format("hudi")
      .mode(SaveMode.Append)
      // table name
      .option(TBL_NAME.key, "tbl_hudi_order")
      // table type
      .option(TABLE_TYPE.key(), "MERGE_ON_READ")
      // record key field (primary key of each record)
      .option(RECORDKEY_FIELD_NAME.key(), "order_id")
      // time-based field used when merging records
      .option(PRECOMBINE_FIELD_NAME.key(), "ts")
      // partition path field
      .option(PARTITIONPATH_FIELD_NAME.key(), "day")
      // whether partition directories follow the Hive partitioning style
      .option(HIVE_STYLE_PARTITIONING_ENABLE.key(), "true")
      // shuffle parallelism for insert and upsert
      .option("hoodie.insert.shuffle.parallelism", "2")
      .option("hoodie.upsert.shuffle.parallelism", "2")
      // table storage path
      .save("/hudi-warehouse/tbl_hudi_order")
  }

1-4 Loading Hudi table data with Spark SQL and analyzing it

  /**
   * Load data from a Hudi table at the given storage path.
   */
  def readFromHudi(spark: SparkSession, path: String): DataFrame = {
    // a. load the data from the path into a DataFrame
    val didiDF: DataFrame = spark.read.format("hudi").load(path)
    // b. select fields
    didiDF
      .select(
        "order_id", "product_id", "type", "traffic_type",
        "pre_total_fee", "start_dest_distance", "departure_time"
      )
  }

  /**
   * Report order counts by order type, grouped on the product_id field.
   */
  def reportProduct(dataframe: DataFrame): Unit = {
    val reportDF: DataFrame = dataframe.groupBy("product_id").count()

    // map product_id to a readable Didi product name
    val to_name = udf((product_id: Int) => {
      product_id match {
        case 1 => "滴滴专车"
        case 2 => "滴滴企业专车"
        case 3 => "滴滴快车"
        case 4 => "滴滴企业快车"
      }
    })

    val resultDF: DataFrame = reportDF.select(
      to_name(col("product_id")).as("order_type"),
      col("count").as("total")
    )
    resultDF.printSchema()
    resultDF.show(10, truncate = false)
  }
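
The original does not show the driver that wires these two functions together. The snippet below is a small hypothetical sketch of such a driver; the table path is an assumed placeholder, not a path taken from the source.

  // Hypothetical driver for section 1-4: load the Hudi table and run the report.
  // "/hudi-warehouse/tbl_didi_haikou" is an assumed example path.
  def runReport(spark: SparkSession): Unit = {
    val didiDF: DataFrame = readFromHudi(spark, "/hudi-warehouse/tbl_didi_haikou")
    reportProduct(didiDF)
  }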

2. Hudi + Flink + Kafka (Java)

Dependencies: see [2. Hudi + Flink + Kafka Dependencies] in the Dependencies appendix.

2-1 Consuming data from Kafka

Step 1, obtaining the table execution environment, needs no further explanation.

Step 2 creates the input table: it specifies the Kafka broker address and port, the topic, and other settings; this is where the data is read from.

Step 3 transforms the data into the format required by the Hudi table (adding the two required fields: the merge field ts and the partition field partition_day).

package cn.itcast.hudi;

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

import static org.apache.flink.table.api.Expressions.$;

public class FlinkSQLKafkaDemo {

    public static void main(String[] args) {
        // 1. Obtain the table execution environment
        EnvironmentSettings settings = EnvironmentSettings
                .newInstance()
                .inStreamingMode() // streaming mode
                .build();
        TableEnvironment tableEnvironment = TableEnvironment.create(settings);

        // 2. Create the input table: consume data from Kafka
        tableEnvironment.executeSql(
                "CREATE TABLE order_kafka_source (\n" +
                "  orderId STRING,\n" +
                "  userId STRING,\n" +
                "  orderTime STRING,\n" +
                "  ip STRING,\n" +
                "  orderMoney DOUBLE,\n" +
                "  orderStatus INT\n" +
                ") WITH (\n" +
                "  'connector' = 'kafka',\n" +
                "  'topic' = 'order-topic',\n" +
                "  'properties.bootstrap.servers' = 'node1.itcast.cn:9099',\n" +
                "  'properties.group.id' = 'gid-1001',\n" +
                "  'scan.startup.mode' = 'latest-offset',\n" +
                "  'format' = 'json',\n" +
                "  'json.fail-on-missing-field' = 'false',\n" +
                "  'json.ignore-parse-errors' = 'true'\n" +
                ")"
        );

        // 3. Transform the data: either SQL or the Table API can be used
        Table table = tableEnvironment
                .from("order_kafka_source")
                // add the Hudi merge field: "orderId":"20211122103434136000001" -> 20211122103434136
                .addColumns($("orderId").substring(0, 17).as("ts"))
                // add the Hudi partition field: "orderTime":"2021-11-22 10:34:34.136" -> 2021-11-22
                .addColumns($("orderTime").substring(0, 10).as("partition_day"));
        tableEnvironment.createTemporaryView("view_order", table);

        // 4. Output the result data (printed to the console here)
        tableEnvironment.executeSql("select * from view_order").print();
    }
}

2-2 Writing the data to the Hudi table

Step 4 creates the output table: it specifies the output Hudi table path (a local path, HDFS, etc.), the table type, the merge (precombine) field, the record key and partition fields, and so on; this is where the data is written.

Step 5 inserts the data into the output Hudi table.

package cn.itcast.hudi;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

import static org.apache.flink.table.api.Expressions.$;

/**
 * Based on the Flink SQL connectors: consume data from a Kafka topic in real time,
 * transform it, and write it into a Hudi table in real time.
 */
public class FlinkSQLHudiDemo {

    public static void main(String[] args) {
        // 1. Obtain the table execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        env.enableCheckpointing(5000); // checkpointing must be enabled because data is written to the Hudi table incrementally
        EnvironmentSettings settings = EnvironmentSettings
                .newInstance()
                .inStreamingMode() // streaming mode
                .build();
        StreamTableEnvironment tableEnvironment = StreamTableEnvironment.create(env, settings);

        // 2. Create the input table: consume data from Kafka
        tableEnvironment.executeSql(
                "CREATE TABLE order_kafka_source (\n" +
                "  orderId STRING,\n" +
                "  userId STRING,\n" +
                "  orderTime STRING,\n" +
                "  ip STRING,\n" +
                "  orderMoney DOUBLE,\n" +
                "  orderStatus INT\n" +
                ") WITH (\n" +
                "  'connector' = 'kafka',\n" +
                "  'topic' = 'order-topic',\n" +
                "  'properties.bootstrap.servers' = 'node1.itcast.cn:9099',\n" +
                "  'properties.group.id' = 'gid-1001',\n" +
                "  'scan.startup.mode' = 'latest-offset',\n" +
                "  'format' = 'json',\n" +
                "  'json.fail-on-missing-field' = 'false',\n" +
                "  'json.ignore-parse-errors' = 'true'\n" +
                ")"
        );

        // 3. Transform the data: either SQL or the Table API can be used
        Table table = tableEnvironment
                .from("order_kafka_source")
                // add the Hudi merge field: "orderId":"20211122103434136000001" -> 20211122103434136
                .addColumns($("orderId").substring(0, 17).as("ts"))
                // add the Hudi partition field: "orderTime":"2021-11-22 10:34:34.136" -> 2021-11-22
                .addColumns($("orderTime").substring(0, 10).as("partition_day"));
        tableEnvironment.createTemporaryView("view_order", table);

        // 4. Create the output table: write the data into the Hudi table
        tableEnvironment.executeSql(
                "CREATE TABLE order_hudi_sink (\n" +
                "  orderId STRING PRIMARY KEY NOT ENFORCED,\n" +
                "  userId STRING,\n" +
                "  orderTime STRING,\n" +
                "  ip STRING,\n" +
                "  orderMoney DOUBLE,\n" +
                "  orderStatus INT,\n" +
                "  ts STRING,\n" +
                "  partition_day STRING\n" +
                ")\n" +
                "PARTITIONED BY (partition_day) \n" +
                "WITH (\n" +
                "  'connector' = 'hudi',\n" +
                "  'path' = 'file:///D:/flink_hudi_order',\n" +
                "  'table.type' = 'MERGE_ON_READ',\n" +
                "  'write.operation' = 'upsert',\n" +
                "  'hoodie.datasource.write.recordkey.field' = 'orderId',\n" +
                "  'write.precombine.field' = 'ts',\n" +
                "  'write.tasks'= '1'\n" +
                ")"
        );

        // 5. Insert the data into the output table via a subquery (note: the field order must match)
        tableEnvironment.executeSql(
                "INSERT INTO order_hudi_sink\n" +
                "SELECT\n" +
                "  orderId, userId, orderTime, ip, orderMoney, orderStatus, ts, partition_day\n" +
                "FROM view_order"
        );
    }
}

2-3 Loading data from the Hudi table

Simply create an input table over the Hudi table and query its data.

package cn.itcast.hudi;

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

/**
 * Based on the Flink SQL connectors: load data from a Hudi table and query it with SQL.
 */
public class FlinkSQLReadDemo {

    public static void main(String[] args) {
        // 1. Obtain the table execution environment
        EnvironmentSettings settings = EnvironmentSettings
                .newInstance()
                .inStreamingMode()
                .build();
        TableEnvironment tableEnvironment = TableEnvironment.create(settings);

        // 2. Create the input table: load and query data from the Hudi table
        tableEnvironment.executeSql(
                "CREATE TABLE order_hudi(\n" +
                "  orderId STRING PRIMARY KEY NOT ENFORCED,\n" +
                "  userId STRING,\n" +
                "  orderTime STRING,\n" +
                "  ip STRING,\n" +
                "  orderMoney DOUBLE,\n" +
                "  orderStatus INT,\n" +
                "  ts STRING,\n" +
                "  partition_day STRING\n" +
                ")\n" +
                "PARTITIONED BY (partition_day)\n" +
                "WITH (\n" +
                "  'connector' = 'hudi',\n" +
                "  'path' = 'file:///D:/flink_hudi_order',\n" +
                "  'table.type' = 'MERGE_ON_READ',\n" +
                "  'read.streaming.enabled' = 'true',\n" +
                "  'read.streaming.check-interval' = '4'\n" +
                ")"
        );

        // 3. Execute the query, reading the Hudi data as a stream
        tableEnvironment.executeSql(
                "SELECT orderId, userId, orderTime, ip, orderMoney, orderStatus, ts, partition_day FROM order_hudi"
        ).print();
    }
}

Appendix: Dependencies

1. Hudi + Spark + Kafka Dependencies

<repositories>
    <repository>
        <id>aliyun</id>
        <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
    </repository>
    <repository>
        <id>cloudera</id>
        <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </repository>
    <repository>
        <id>jboss</id>
        <url>http://repository.jboss.com/nexus/content/groups/public</url>
    </repository>
</repositories>

<properties>
    <scala.version>2.12.10</scala.version>
    <scala.binary.version>2.12</scala.binary.version>
    <spark.version>3.0.0</spark.version>
    <hadoop.version>2.7.3</hadoop.version>
    <hudi.version>0.9.0</hudi.version>
</properties>

<dependencies>
    <!-- Scala language -->
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
    </dependency>
    <!-- Spark Core -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_${scala.binary.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <!-- Spark SQL -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_${scala.binary.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <!-- Structured Streaming + Kafka -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql-kafka-0-10_${scala.binary.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <!-- Hadoop Client -->
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <!-- hudi-spark3 -->
    <dependency>
        <groupId>org.apache.hudi</groupId>
        <artifactId>hudi-spark3-bundle_2.12</artifactId>
        <version>${hudi.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-avro_2.12</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <!-- Spark SQL integration with Hive -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-hive_${scala.binary.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-hive-thriftserver_${scala.binary.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.httpcomponents</groupId>
        <artifactId>httpcore</artifactId>
        <version>4.4.13</version>
    </dependency>
    <dependency>
        <groupId>org.apache.httpcomponents</groupId>
        <artifactId>httpclient</artifactId>
        <version>4.5.12</version>
    </dependency>
</dependencies>

<build>
    <outputDirectory>target/classes</outputDirectory>
    <testOutputDirectory>target/test-classes</testOutputDirectory>
    <resources>
        <resource>
            <directory>${project.basedir}/src/main/resources</directory>
        </resource>
    </resources>
    <!-- Maven compiler plugins -->
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.0</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
                <encoding>UTF-8</encoding>
            </configuration>
        </plugin>
        <plugin>
            <groupId>net.alchim31.maven</groupId>
            <artifactId>scala-maven-plugin</artifactId>
            <version>3.2.0</version>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                        <goal>testCompile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

2. Hudi + Flink + Kafka Dependencies

<repositories>
    <repository>
        <id>nexus-aliyun</id>
        <name>Nexus aliyun</name>
        <url>http://maven.aliyun.com/nexus/content/groups/public</url>
    </repository>
    <repository>
        <id>central_maven</id>
        <name>central maven</name>
        <url>https://repo1.maven.org/maven2</url>
    </repository>
    <repository>
        <id>cloudera</id>
        <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </repository>
    <repository>
        <id>apache.snapshots</id>
        <name>Apache Development Snapshot Repository</name>
        <url>https://repository.apache.org/content/repositories/snapshots/</url>
        <releases>
            <enabled>false</enabled>
        </releases>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>
</repositories>

<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>${java.version}</maven.compiler.source>
    <maven.compiler.target>${java.version}</maven.compiler.target>
    <java.version>1.8</java.version>
    <scala.binary.version>2.12</scala.binary.version>
    <flink.version>1.12.2</flink.version>
    <hadoop.version>2.7.3</hadoop.version>
    <mysql.version>8.0.16</mysql.version>
</properties>

<dependencies>
    <!-- Flink Client -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-java</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-runtime-web_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <!-- Flink Table API & SQL -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-common</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-planner-blink_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-api-java-bridge_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-kafka_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-json</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hudi</groupId>
        <artifactId>hudi-flink-bundle_${scala.binary.version}</artifactId>
        <version>0.9.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-shaded-hadoop-2-uber</artifactId>
        <version>2.7.5-10.0</version>
    </dependency>
    <!-- MySQL / FastJson / Lombok -->
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>${mysql.version}</version>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>1.2.68</version>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <version>1.18.12</version>
    </dependency>
    <!-- slf4j and log4j -->
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-log4j12</artifactId>
        <version>1.7.7</version>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.17</version>
        <scope>runtime</scope>
    </dependency>
</dependencies>

<build>
    <sourceDirectory>src/main/java</sourceDirectory>
    <testSourceDirectory>src/test/java</testSourceDirectory>
    <plugins>
        <!-- compiler plugin -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.5.1</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
                <!--<encoding>${project.build.sourceEncoding}</encoding>-->
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>2.18.1</version>
            <configuration>
                <useFile>false</useFile>
                <disableXmlReport>true</disableXmlReport>
                <includes>
                    <include>**/*Test.*</include>
                    <include>**/*Suite.*</include>
                </includes>
            </configuration>
        </plugin>
        <!-- jar packaging plugin (bundles all dependencies) -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>2.3</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <filters>
                            <filter>
                                <artifact>*:*</artifact>
                                <excludes>
                                    <exclude>META-INF/*.SF</exclude>
                                    <exclude>META-INF/*.DSA</exclude>
                                    <exclude>META-INF/*.RSA</exclude>
                                </excludes>
                            </filter>
                        </filters>
                        <transformers>
                            <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                <!-- <mainClass>com.itcast.flink.batch.FlinkBatchWordCount</mainClass> -->
                            </transformer>
                        </transformers>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Appendix: Errors

1. Runtime error

[Error message]

Could not locate executable null\bin\winutils.exe in the Hadoop binaries.

[Cause]

When running on Windows you need the Windows support binaries for Hadoop (hadoop2.7-common-bin, which contains winutils.exe).

Download: https://gitcode.net/mirrors/cdarlint/winutils?utm_source=csdn_github_accelerator

[Fix]

Download the package for the version you need, configure the HADOOP_HOME and Path environment variables, then restart IDEA and run again; the error no longer appears. The hadoop checknative command below can be used to check which native libraries Hadoop picks up:

cd hudi/server/hadoop

./bin/hadoop checknative

2. Runtime error

[Error]

NoSuchFieldError: INSTANCE

[Cause]

The httpclient and httpcore versions used by the code are newer, while the versions shipped with Hadoop are too old (< 4.3).

[Fix]

Replace the httpclient and httpcore jars under $HADOOP_HOME/share/hadoop/common/lib and $HADOOP_HOME/share/hadoop/tools/lib/ with newer versions (> 4.3):

cd /home/zhangheng/hudi/server/hadoop/share/hadoop/common/lib
rm httpclient-4.2.5.jar
rm httpcore-4.2.5.jar
cd /home/zhangheng/hudi/server/hadoop/share/hadoop/tools/lib
rm httpclient-4.2.5.jar
rm httpcore-4.2.5.jar
scp -r D:\Users\zh\Desktop\Hudi\compressedPackage\httpclient-4.4.jar zhangheng@10.8.4.212:/home/zhangheng/hudi/server/hadoop/share/hadoop/common/lib
scp -r D:\Users\zh\Desktop\Hudi\compressedPackage\httpcore-4.4.jar zhangheng@10.8.4.212:/home/zhangheng/hudi/server/hadoop/share/hadoop/common/lib
scp -r D:\Users\zh\Desktop\Hudi\compressedPackage\httpclient-4.4.jar zhangheng@10.8.4.212:/home/zhangheng/hudi/server/hadoop/share/hadoop/tools/lib
scp -r D:\Users\zh\Desktop\Hudi\compressedPackage\httpcore-4.4.jar zhangheng@10.8.4.212:/home/zhangheng/hudi/server/hadoop/share/hadoop/tools/lib

3. Runtime warning

[Warning]

WARN ProcfsMetricsGetter: Exception when trying to compute pagesize, as a result reporting of ProcessTree metrics is stopped

[Cause]

The Spark version was too new. Spark v3.0.0 was chosen at first, but it did not work well here; switching to v2.4.6 resolved the warning.

[Fix]

Official download page: https://archive.apache.org/dist/spark/spark-2.4.6/
Download spark-2.4.6-bin-hadoop2.7.tgz, install it, and configure the environment variables.

Appendix: Configuration

1. Scala Configuration

1. Install Scala on Windows: https://www.scala-lang.org/
   After installation, configure the SCALA_HOME and Path environment variables.
   Run scala -version to check that the installation succeeded.
2. Install the Scala plugin in IDEA: search for "scala" under Plugins and install it directly.
   After restarting, go to File -> Project Structure, select Global Libraries in the lower left, click the + button in the middle, choose Scala SDK (the last entry), pick the Scala version you installed, and click OK.

2. Virtual Machine Configuration in IDEA

Tools -> Deployment -> Browse Remote Host
Configure your virtual machine's SSH configuration, Root path, and Web server URL.
