Unrecognized Hadoop major version number: 3.0.0-cdh6.3.2

I. Environment description

A Spark job submitted to YARN kept failing. The business logic is simple: fetch data through an interface call, then write it into Hive with Spark SQL. All sorts of Hadoop version swaps were tried before the real fix turned out to be in how the jar is packaged.

1. Hadoop environment

The cluster runs CDH 6.3.2; Hadoop reports its version as 3.0.0-cdh6.3.2 (the version string that appears in the error below).

2. Submit command and project pom.xml

spark-submit \
--name GridCorrelationMain \
--master yarn \
--deploy-mode cluster \
--executor-cores 2 \
--executor-memory 4G \
--num-executors 5 \
--driver-memory 2G \
--class cn.zd.maincode.wangge.GridCorrelationMain \
/home/boeadm/zwj/iot/cp-etl-spark-data/target/cp_zhengda_spark_utils-1.0-SNAPSHOT.jar

The dependency section of the pom:

<dependencies>
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-configuration2</artifactId>
        <version>2.9.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>2.3.3</version>
        <exclusions>
            <exclusion>
                <groupId>org.apache.hadoop</groupId>
                <artifactId>hadoop-client</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>2.3.3</version>
        <!--<scope>provided</scope>-->
        <!--<exclusions><exclusion><groupId>com.google.guava</groupId><artifactId>guava</artifactId></exclusion></exclusions>-->
    </dependency>
    <!--<dependency><groupId>com.google.guava</groupId><artifactId>guava</artifactId><version>15.0</version></dependency>-->
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>${hadoop.version}</version>
        <exclusions>
            <exclusion>
                <groupId>commons-codec</groupId>
                <artifactId>commons-codec</artifactId>
            </exclusion>
            <exclusion>
                <groupId>commons-httpclient</groupId>
                <artifactId>commons-httpclient</artifactId>
            </exclusion>
            <!--<exclusion><groupId>com.google.guava</groupId><artifactId>guava</artifactId></exclusion>-->
        </exclusions>
        <!--<scope>provided</scope>-->
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>${hadoop.version}</version>
        <exclusions>
            <exclusion>
                <groupId>org.apache.hadoop</groupId>
                <artifactId>hadoop-common</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-hive_2.11</artifactId>
        <version>2.3.2</version>
        <exclusions>
            <exclusion>
                <groupId>org.spark-project.hive</groupId>
                <artifactId>hive-exec</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.spark-project.hive</groupId>
                <artifactId>hive-metastore</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-core</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hive</groupId>
        <artifactId>hive-jdbc</artifactId>
        <version>2.1.1</version>
        <exclusions>
            <exclusion>
                <groupId>org.eclipse.jetty.aggregate</groupId>
                <artifactId>jetty-all</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.apache.hive</groupId>
                <artifactId>hive-shims</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.apache.hbase</groupId>
                <artifactId>hbase-mapreduce</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.apache.hbase</groupId>
                <artifactId>hbase-server</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.apache.logging.log4j</groupId>
                <artifactId>log4j-slf4j-impl</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <!-- dependencies for service verification -->
    <dependency>
        <groupId>org.apache.httpcomponents</groupId>
        <artifactId>httpclient</artifactId>
        <version>4.5.13</version>
        <exclusions>
            <exclusion>
                <groupId>commons-codec</groupId>
                <artifactId>commons-codec</artifactId>
            </exclusion>
        </exclusions>
        <!--<scope>provided</scope>-->
    </dependency>
    <!-- needed when running locally -->
    <dependency>
        <groupId>commons-codec</groupId>
        <artifactId>commons-codec</artifactId>
        <version>1.15</version>
        <!--<scope>provided</scope>-->
    </dependency>
    <dependency>
        <groupId>com.typesafe</groupId>
        <artifactId>config</artifactId>
        <version>1.3.1</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/com.alibaba/fastjson -->
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>1.2.62</version>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>${fastjson.version}</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.json/json -->
    <dependency>
        <groupId>org.json</groupId>
        <artifactId>json</artifactId>
        <version>20160810</version>
    </dependency>
    <dependency>
        <groupId>com.github.qlone</groupId>
        <artifactId>retrofit-crawler</artifactId>
        <version>1.0.0</version>
    </dependency>
    <dependency>
        <groupId>com.oracle.database.jdbc</groupId>
        <artifactId>ojdbc8</artifactId>
        <version>12.2.0.1</version>
    </dependency>
    <!-- MySQL connector -->
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.40</version>
    </dependency>
    <dependency>
        <groupId>javax.mail</groupId>
        <artifactId>javax.mail-api</artifactId>
        <version>1.5.6</version>
    </dependency>
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-email</artifactId>
        <version>1.4</version>
    </dependency>
</dependencies>
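The excerpt references ${hadoop.version} and ${fastjson.version} properties whose definitions are not shown. A sketch of what the missing properties block presumably looks like; the exact values are assumptions, with the Hadoop version pinned to the cluster's CDH build and Cloudera's repository added so that cdh-suffixed artifacts resolve:

<!-- Assumed, not shown in the original pom: these property values and the
     Cloudera repository are illustrative guesses. -->
<properties>
    <hadoop.version>3.0.0-cdh6.3.2</hadoop.version>
    <fastjson.version>1.2.62</fastjson.version>
</properties>
<repositories>
    <repository>
        <id>cloudera</id>
        <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </repository>
</repositories>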

3. Error on submitting to the cluster

The YARN stderr log for the application ends with the stack trace below (the first lines of the trace were not captured):


        at org.apache.spark.sql.catalyst.catalog.SessionCatalog.lookupRelation(SessionCatalog.scala:696)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$lookupTableFromCatalog(Analyzer.scala:730)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.resolveRelation(Analyzer.scala:685)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:715)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:708)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$apply$1.apply(AnalysisHelper.scala:90)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$apply$1.apply(AnalysisHelper.scala:90)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:89)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:86)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsUp(AnalysisHelper.scala:86)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:29)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$1.apply(AnalysisHelper.scala:87)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$1.apply(AnalysisHelper.scala:87)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:326)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:324)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:87)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:86)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsUp(AnalysisHelper.scala:86)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:29)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$1.apply(AnalysisHelper.scala:87)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$1.apply(AnalysisHelper.scala:87)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:326)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:324)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:87)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:86)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsUp(AnalysisHelper.scala:86)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:29)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:708)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:654)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
        at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
        at scala.collection.immutable.List.foldLeft(List.scala:84)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)
        at scala.collection.immutable.List.foreach(List.scala:392)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:127)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:121)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:106)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:105)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
        at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
        at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
        at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:78)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:651)
        at cn.zd.maincode.wangge.GridCorrelationMain$.createDataFrameAndTempView(GridCorrelationMain.scala:264)
        at cn.zd.maincode.wangge.GridCorrelationMain$.horecaGridInfo(GridCorrelationMain.scala:148)
        at cn.zd.maincode.wangge.GridCorrelationMain$.main(GridCorrelationMain.scala:110)
        at cn.zd.maincode.wangge.GridCorrelationMain.main(GridCorrelationMain.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:673)
Caused by: java.lang.ExceptionInInitializerError
        at org.apache.hadoop.hive.conf.HiveConf.<clinit>(HiveConf.java:105)
        at org.apache.spark.sql.hive.client.HiveClientImpl.newState(HiveClientImpl.scala:153)
        at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:118)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:292)
        at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:395)
        at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:284)
        at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:68)
        at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:67)
        at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:217)
        at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:217)
        at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:217)
        at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:99)
        ... 72 more
Caused by: java.lang.IllegalArgumentException: Unrecognized Hadoop major version number: 3.0.0-cdh6.3.2
        at org.apache.hadoop.hive.shims.ShimLoader.getMajorVersion(ShimLoader.java:169)
        at org.apache.hadoop.hive.shims.ShimLoader.loadShims(ShimLoader.java:134)
        at org.apache.hadoop.hive.shims.ShimLoader.getHadoopShims(ShimLoader.java:95)
        at org.apache.hadoop.hive.conf.HiveConf$ConfVars.<clinit>(HiveConf.java:354)
        ... 88 more

End of LogType:stderr

4. Final solution

Keep the conflicting Hive dependencies out of the application jar.

The root cause: the fat jar bundled Hive 2.1.1 classes pulled in through hive-jdbc, and that release's ShimLoader predates Hadoop 3, so it throws the IllegalArgumentException above as soon as it sees the cluster's version string 3.0.0-cdh6.3.2. Once the bundled shim classes are gone (exclude hive-shims from hive-jdbc; running jar tf on the built jar is a quick way to verify nothing shim-related is packaged), the CDH-provided classes on the cluster classpath are loaded instead and the job runs. The final dependency section is below, followed by a provided-scope sketch:

<dependencies>
    <dependency>
        <groupId>org.apache.hive</groupId>
        <artifactId>hive-jdbc</artifactId>
        <version>2.1.1</version>
        <exclusions>
            <exclusion>
                <groupId>org.eclipse.jetty.aggregate</groupId>
                <artifactId>jetty-all</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.apache.hive</groupId>
                <artifactId>hive-shims</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.apache.hbase</groupId>
                <artifactId>hbase-mapreduce</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.apache.hbase</groupId>
                <artifactId>hbase-server</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.apache.logging.log4j</groupId>
                <artifactId>log4j-slf4j-impl</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <!-- dependencies for service verification -->
    <dependency>
        <groupId>org.apache.httpcomponents</groupId>
        <artifactId>httpclient</artifactId>
        <version>4.5.13</version>
        <exclusions>
            <exclusion>
                <groupId>commons-codec</groupId>
                <artifactId>commons-codec</artifactId>
            </exclusion>
        </exclusions>
        <!--<scope>provided</scope>-->
    </dependency>
    <!-- needed when running locally -->
    <dependency>
        <groupId>commons-codec</groupId>
        <artifactId>commons-codec</artifactId>
        <version>1.15</version>
        <!--<scope>provided</scope>-->
    </dependency>
    <dependency>
        <groupId>com.typesafe</groupId>
        <artifactId>config</artifactId>
        <version>1.3.1</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/com.alibaba/fastjson -->
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>1.2.62</version>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>${fastjson.version}</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.json/json -->
    <dependency>
        <groupId>org.json</groupId>
        <artifactId>json</artifactId>
        <version>20160810</version>
    </dependency>
    <dependency>
        <groupId>com.github.qlone</groupId>
        <artifactId>retrofit-crawler</artifactId>
        <version>1.0.0</version>
    </dependency>
    <dependency>
        <groupId>com.oracle.database.jdbc</groupId>
        <artifactId>ojdbc8</artifactId>
        <version>12.2.0.1</version>
    </dependency>
    <!-- MySQL connector -->
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.40</version>
    </dependency>
    <!-- newly removed on Oct 31 -->
    <!--<dependency><groupId>com.google.guava</groupId><artifactId>guava</artifactId><version>28.0-jre</version></dependency>-->
    <!-- https://mvnrepository.com/artifact/org.apache.directory.studio/org.apache.commons.codec -->
    <!-- https://mvnrepository.com/artifact/org.apache.commons/org.apache.commons.codec -->
    <!-- email-sending dependencies -->
    <dependency>
        <groupId>javax.mail</groupId>
        <artifactId>javax.mail-api</artifactId>
        <version>1.5.6</version>
    </dependency>
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-email</artifactId>
        <version>1.4</version>
    </dependency>
    <!--<dependency><groupId>org.scala-lang</groupId><artifactId>scala-library</artifactId><version>2.11.2</version></dependency>-->
    <!--<dependency><groupId>org.scala-lang</groupId><artifactId>scala-reflect</artifactId><version>2.11.2</version></dependency>-->
    <!--<dependency><groupId>org.scala-lang</groupId><artifactId>scala-compiler</artifactId><version>2.11.2</version></dependency>-->
    <!--<dependency>
        <groupId>com.starrocks</groupId>
        <artifactId>starrocks-spark2_2.11</artifactId>
        <version>1.0.1</version>
    </dependency>-->
</dependencies>
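Excluding hive-shims from hive-jdbc keeps the stale ShimLoader out of the fat jar. A broader pattern is to mark everything the cluster already ships as provided, sketched below on the assumption that the jar is built with maven-shade or the assembly plugin's jar-with-dependencies descriptor (both leave provided-scope artifacts out). The original pom already carries commented-out <scope>provided</scope> hints in the same spirit:

<!-- Sketch only, not the pom the author used: provided scope keeps these
     artifacts out of the fat jar, so the CDH cluster supplies its own
     Spark/Hive/Hadoop classes at runtime. -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-hive_2.11</artifactId>
    <version>2.3.2</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>${hadoop.version}</version>
    <scope>provided</scope>
</dependency>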
