Enabling Kerberos-authenticated Hive access in Apache Hudi through source code modification

Hudi 0.10.0 Kerberos-support adaptation notes

About this document

This document explains how to add Kerberos authentication support on top of Hudi 0.10.0.

Main contributions:

  1. Extended the Hudi source code we are currently using with Kerberos support; the change spans 12 files, roughly 20 code locations, and about 200 lines of code in total;
  2. Modified the Hudi 0.10.0 source code so that, while keeping all of our custom features, it supports Kerberos-authenticated synchronization of Hive tables;
  3. Summarized the general workflow for this kind of adaptation and illustrated the main steps with a concrete example.

The overall approach and steps are as follows:

  1. Following the description in the blog post 将hudi同步到配置kerberos的hive3 (syncing Hudi to a Kerberos-enabled Hive 3), add Kerberos authentication support and verify that it works
  2. Using the upstream Hudi PR Hudi-2402, which has not yet been merged into the main branch, and xiaozhch5's branch, compare, sort out, and adapt the code against our local branch
  3. Add Kerberos support, along with the corresponding configuration options, to the Hudi-to-Hive table synchronization path
  4. Add the dependencies and configuration entries required by the modified code to the pom files

Branch commit history

The commits of the modified branch are listed below. Inspect the commit history to find where the custom branch diverges from the Hudi 0.10.1 release branch on the commit timeline.

The current HEAD commit hash is 4c65ca544, while the Hudi 0.10.1 release commit hash is 84fb390e4.


commit 4c65ca544b91e828462419bbc12e116bfe1dbc2c (origin/0.10.1-release-hive3-kerberos-enabled)
Author: xiaozhch5 <xiaozhch5@mail2.sysu.edu.cn>
Date:   Wed Mar 2 00:15:05 2022 +0800

    Add a krb5.conf file path option, defaulting to /etc/krb5.conf

commit 116352beb2e028357d0ffca385dd2f11a9cef72b
Author: xiaozhch5 <xiaozhch5@mail2.sysu.edu.cn>
Date:   Tue Mar 1 23:30:08 2022 +0800

    Add the build command

commit ffc26256ba4cbb52ea653551ea88d109bc26e315
Author: xiaozhch5 <xiaozhch5@mail2.sysu.edu.cn>
Date:   Tue Mar 1 23:14:21 2022 +0800

    Adapt the build to HDP 3.1.4 and fix missing-package issues

commit fbc53aa29e63dc5b097a3014d05f6b82cfcf2a70
Author: xiaozhch5 <xiaozhch5@mail2.sysu.edu.cn>
Date:   Tue Mar 1 22:20:03 2022 +0800

    [MINOR] Remove org.apache.directory.api.util.Strings import

commit 05fee3608d17abbd0217818a6bf02e4ead8f6de8
Author: xiaozhch5 <xiaozhch5@mail2.sysu.edu.cn>
Date:   Tue Mar 1 21:07:34 2022 +0800

    Add Flink engine support for syncing Hudi to a Kerberos-enabled Hive 3 metastore; only the Flink 1.13 engine is covered, other engines are unchanged

commit 84fb390e42cbbb72d1aaf4cf8f44cd6fba049595 (tag: release-0.10.1, origin/release-0.10.1)
Author: sivabalan <n.siva.b@gmail.com>
Date:   Tue Jan 25 20:15:31 2022 -0500

    [MINOR] Update release version to reflect published version 0.10.1

Comparing the changes between the two versions

After locating the two commits, use the git diff command to compare the code and write the result to a file.

The command used is git diff 4c65ca544 84fb390e4 >> commit.diff

diff --git a/compile-command.sh b/compile-command.sh
deleted file mode 100644
index c5536c86c..000000000
--- a/compile-command.sh
+++ /dev/null
@@ -1,9 +0,0 @@
-mvn clean install -DskipTests \
--Dhadoop.version=3.1.1.3.1.4.0-315 \
--Dhive.version=3.1.0.3.1.4.0-315 \
--Dscala.version=2.12.10 \
--Dscala.binary.version=2.12 \
--Dspark.version=3.0.1 \
--Dflink.version=1.13.5 \
--Pflink-bundle-shade-hive3 \
--Pspark3
\ No newline at end of file
diff --git a/hudi-aws/pom.xml b/hudi-aws/pom.xml
index 8c7f6dc73..d853690c0 100644
--- a/hudi-aws/pom.xml
+++ b/hudi-aws/pom.xml
@@ -116,12 +116,6 @@
             <artifactId>mockito-junit-jupiter</artifactId>
             <scope>test</scope>
         </dependency>
-
-        <dependency>
-            <groupId>com.google.code.findbugs</groupId>
-            <artifactId>jsr305</artifactId>
-            <version>3.0.0</version>
-        </dependency>
     </dependencies>
     <build>
diff --git a/hudi-common/src/test/java/org/apache/hudi/common/testutils/FileCreateUtils.java b/hudi-common/src/test/java/org/apache/hudi/common/testutils/FileCreateUtils.java
index b7d6adf38..1968ef422 100644
--- a/hudi-common/src/test/java/org/apache/hudi/common/testutils/FileCreateUtils.java
+++ b/hudi-common/src/test/java/org/apache/hudi/common/testutils/FileCreateUtils.java
@@ -19,6 +19,7 @@
 package org.apache.hudi.common.testutils;
+import org.apache.directory.api.util.Strings;
 import org.apache.hudi.avro.model.HoodieCleanMetadata;
 import org.apache.hudi.avro.model.HoodieCleanerPlan;
 import org.apache.hudi.avro.model.HoodieCompactionPlan;
@@ -72,8 +73,6 @@ public class FileCreateUtils {
   private static final String WRITE_TOKEN = "1-0-1";
   private static final String BASE_FILE_EXTENSION = HoodieTableConfig.BASE_FILE_FORMAT.defaultValue().getFileExtension();
-  /** An empty byte array */
-  public static final byte[] EMPTY_BYTES = new byte[0];
   public static String baseFileName(String instantTime, String fileId) {
     return baseFileName(instantTime, fileId, BASE_FILE_EXTENSION);
@@ -222,7 +221,7 @@ public class FileCreateUtils {
   }
   public static void createCleanFile(String basePath, String instantTime, HoodieCleanMetadata metadata, boolean isEmpty) throws IOException {
-    createMetaFile(basePath, instantTime, HoodieTimeline.CLEAN_EXTENSION, isEmpty ? EMPTY_BYTES : serializeCleanMetadata(metadata).get());
+    createMetaFile(basePath, instantTime, HoodieTimeline.CLEAN_EXTENSION, isEmpty ? Strings.EMPTY_BYTES : serializeCleanMetadata(metadata).get());
   }
   public static void createRequestedCleanFile(String basePath, String instantTime, HoodieCleanerPlan cleanerPlan) throws IOException {
@@ -230,7 +229,7 @@ public class FileCreateUtils {
   }
   public static void createRequestedCleanFile(String basePath, String instantTime, HoodieCleanerPlan cleanerPlan, boolean isEmpty) throws IOException {
-    createMetaFile(basePath, instantTime, HoodieTimeline.REQUESTED_CLEAN_EXTENSION, isEmpty ? EMPTY_BYTES : serializeCleanerPlan(cleanerPlan).get());
+    createMetaFile(basePath, instantTime, HoodieTimeline.REQUESTED_CLEAN_EXTENSION, isEmpty ? Strings.EMPTY_BYTES : serializeCleanerPlan(cleanerPlan).get());
   }
   public static void createInflightCleanFile(String basePath, String instantTime, HoodieCleanerPlan cleanerPlan) throws IOException {
@@ -238,7 +237,7 @@ public class FileCreateUtils {
   }
   public static void createInflightCleanFile(String basePath, String instantTime, HoodieCleanerPlan cleanerPlan, boolean isEmpty) throws IOException {
-    createMetaFile(basePath, instantTime, HoodieTimeline.INFLIGHT_CLEAN_EXTENSION, isEmpty ? EMPTY_BYTES : serializeCleanerPlan(cleanerPlan).get());
+    createMetaFile(basePath, instantTime, HoodieTimeline.INFLIGHT_CLEAN_EXTENSION, isEmpty ? Strings.EMPTY_BYTES : serializeCleanerPlan(cleanerPlan).get());
   }
   public static void createRequestedRollbackFile(String basePath, String instantTime, HoodieRollbackPlan plan) throws IOException {
@@ -250,7 +249,7 @@ public class FileCreateUtils {
   }
   public static void createRollbackFile(String basePath, String instantTime, HoodieRollbackMetadata hoodieRollbackMetadata, boolean isEmpty) throws IOException {
-    createMetaFile(basePath, instantTime, HoodieTimeline.ROLLBACK_EXTENSION, isEmpty ? EMPTY_BYTES : serializeRollbackMetadata(hoodieRollbackMetadata).get());
+    createMetaFile(basePath, instantTime, HoodieTimeline.ROLLBACK_EXTENSION, isEmpty ? Strings.EMPTY_BYTES : serializeRollbackMetadata(hoodieRollbackMetadata).get());
   }
   public static void createRestoreFile(String basePath, String instantTime, HoodieRestoreMetadata hoodieRestoreMetadata) throws IOException {
diff --git a/hudi-flink/src/main/java/org/apache/hudi/configuration/FlinkOptions.java b/hudi-flink/src/main/java/org/apache/hudi/configuration/FlinkOptions.java
index 621aea3d2..77c3f15e5 100644
--- a/hudi-flink/src/main/java/org/apache/hudi/configuration/FlinkOptions.java
+++ b/hudi-flink/src/main/java/org/apache/hudi/configuration/FlinkOptions.java
@@ -653,36 +653,6 @@ public class FlinkOptions extends HoodieConfig {
       .withDescription("INT64 with original type TIMESTAMP_MICROS is converted to hive timestamp type.\n"
           + "Disabled by default for backward compatibility.");
-  public static final ConfigOption<Boolean> HIVE_SYNC_KERBEROS_ENABLE = ConfigOptions
-      .key("hive_sync.kerberos.enable")
-      .booleanType()
-      .defaultValue(false)
-      .withDescription("Whether hive is configured with kerberos");
-
-  public static final ConfigOption<String> HIVE_SYNC_KERBEROS_KRB5CONF = ConfigOptions
-      .key("hive_sync.kerberos.krb5.conf")
-      .stringType()
-      .defaultValue("")
-      .withDescription("kerberos krb5.conf file path");
-
-  public static final ConfigOption<String> HIVE_SYNC_KERBEROS_PRINCIPAL = ConfigOptions
-      .key("hive_sync.kerberos.principal")
-      .stringType()
-      .defaultValue("")
-      .withDescription("hive metastore kerberos principal");
-
-  public static final ConfigOption<String> HIVE_SYNC_KERBEROS_KEYTAB_FILE = ConfigOptions
-      .key("hive_sync.kerberos.keytab.file")
-      .stringType()
-      .defaultValue("")
-      .withDescription("Hive metastore keytab file path");
-
-  public static final ConfigOption<String> HIVE_SYNC_KERBEROS_KEYTAB_NAME = ConfigOptions
-      .key("hive_sync.kerberos.keytab.name")
-      .stringType()
-      .defaultValue("")
-      .withDescription("Hive metastore keytab file name");
-
   // -------------------------------------------------------------------------
   //  Utilities
   // -------------------------------------------------------------------------
diff --git a/hudi-flink/src/main/java/org/apache/hudi/sink/utils/HiveSyncContext.java b/hudi-flink/src/main/java/org/apache/hudi/sink/utils/HiveSyncContext.java
index bedc20f9b..1c051c8cd 100644
--- a/hudi-flink/src/main/java/org/apache/hudi/sink/utils/HiveSyncContext.java
+++ b/hudi-flink/src/main/java/org/apache/hudi/sink/utils/HiveSyncContext.java
@@ -86,11 +86,6 @@ public class HiveSyncContext {
     hiveSyncConfig.skipROSuffix = conf.getBoolean(FlinkOptions.HIVE_SYNC_SKIP_RO_SUFFIX);
     hiveSyncConfig.assumeDatePartitioning = conf.getBoolean(FlinkOptions.HIVE_SYNC_ASSUME_DATE_PARTITION);
     hiveSyncConfig.withOperationField = conf.getBoolean(FlinkOptions.CHANGELOG_ENABLED);
-    hiveSyncConfig.enableKerberos = conf.getBoolean(FlinkOptions.HIVE_SYNC_KERBEROS_ENABLE);
-    hiveSyncConfig.krb5Conf = conf.getString(FlinkOptions.HIVE_SYNC_KERBEROS_KRB5CONF);
-    hiveSyncConfig.principal = conf.getString(FlinkOptions.HIVE_SYNC_KERBEROS_PRINCIPAL);
-    hiveSyncConfig.keytabFile = conf.getString(FlinkOptions.HIVE_SYNC_KERBEROS_KEYTAB_FILE);
-    hiveSyncConfig.keytabName = conf.getString(FlinkOptions.HIVE_SYNC_KERBEROS_KEYTAB_NAME);
     return hiveSyncConfig;
   }
 }
diff --git a/hudi-hadoop-mr/pom.xml b/hudi-hadoop-mr/pom.xml
index ef0ea945a..7283d74f0 100644
--- a/hudi-hadoop-mr/pom.xml
+++ b/hudi-hadoop-mr/pom.xml
@@ -67,17 +67,6 @@
     <dependency>
       <groupId>${hive.groupid}</groupId>
       <artifactId>hive-jdbc</artifactId>
-      <exclusions>
-        <exclusion>
-          <groupId>org.apache.hadoop</groupId>
-          <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.hadoop</groupId>
-      <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
-      <version>${hadoop.version}</version>
     </dependency>
     <dependency>
       <groupId>${hive.groupid}</groupId>
diff --git a/hudi-sync/hudi-hive-sync/src/main/java/org/apache/hudi/hive/HiveSyncConfig.java b/hudi-sync/hudi-hive-sync/src/main/java/org/apache/hudi/hive/HiveSyncConfig.java
index 2701820b8..9b6385120 100644
--- a/hudi-sync/hudi-hive-sync/src/main/java/org/apache/hudi/hive/HiveSyncConfig.java
+++ b/hudi-sync/hudi-hive-sync/src/main/java/org/apache/hudi/hive/HiveSyncConfig.java
@@ -123,21 +123,6 @@ public class HiveSyncConfig implements Serializable {
   @Parameter(names = {"--conditional-sync"}, description = "If true, only sync on conditions like schema change or partition change.")
   public Boolean isConditionalSync = false;
-  @Parameter(names = {"--enable-kerberos"}, description = "Whether hive configs kerberos")
-  public Boolean enableKerberos = false;
-
-  @Parameter(names = {"--krb5-conf"}, description = "krb5.conf file path")
-  public String krb5Conf = "/etc/krb5.conf";
-
-  @Parameter(names = {"--principal"}, description = "hive metastore principal")
-  public String principal = "hive/_HOST@EXAMPLE.COM";
-
-  @Parameter(names = {"--keytab-file"}, description = "hive metastore keytab file path")
-  public String keytabFile;
-
-  @Parameter(names = {"--keytab-name"}, description = "hive metastore keytab name")
-  public String keytabName;
-
   // enhance the similar function in child class
   public static HiveSyncConfig copy(HiveSyncConfig cfg) {
     HiveSyncConfig newConfig = new HiveSyncConfig();
@@ -162,11 +147,6 @@ public class HiveSyncConfig implements Serializable {
     newConfig.sparkSchemaLengthThreshold = cfg.sparkSchemaLengthThreshold;
     newConfig.withOperationField = cfg.withOperationField;
     newConfig.isConditionalSync = cfg.isConditionalSync;
-    newConfig.enableKerberos = cfg.enableKerberos;
-    newConfig.krb5Conf = cfg.krb5Conf;
-    newConfig.principal = cfg.principal;
-    newConfig.keytabFile = cfg.keytabFile;
-    newConfig.keytabName = cfg.keytabName;
     return newConfig;
   }
@@ -199,11 +179,6 @@ public class HiveSyncConfig implements Serializable {
       + ", sparkSchemaLengthThreshold=" + sparkSchemaLengthThreshold
       + ", withOperationField=" + withOperationField
       + ", isConditionalSync=" + isConditionalSync
-      + ", enableKerberos=" + enableKerberos
-      + ", krb5Conf=" + krb5Conf
-      + ", principal=" + principal
-      + ", keytabFile=" + keytabFile
-      + ", keytabName=" + keytabName
       + '}';
   }
 }
diff --git a/hudi-sync/hudi-hive-sync/src/main/java/org/apache/hudi/hive/HiveSyncTool.java b/hudi-sync/hudi-hive-sync/src/main/java/org/apache/hudi/hive/HiveSyncTool.java
index 56553f1ed..b37b28ed2 100644
--- a/hudi-sync/hudi-hive-sync/src/main/java/org/apache/hudi/hive/HiveSyncTool.java
+++ b/hudi-sync/hudi-hive-sync/src/main/java/org/apache/hudi/hive/HiveSyncTool.java
@@ -23,7 +23,6 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.hive.conf.HiveConf;
 import org.apache.hadoop.hive.metastore.api.Partition;
-import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hudi.common.fs.FSUtils;
 import org.apache.hudi.common.model.HoodieFileFormat;
 import org.apache.hudi.common.model.HoodieTableType;
@@ -44,7 +43,6 @@ import org.apache.parquet.schema.MessageType;
 import org.apache.parquet.schema.PrimitiveType;
 import org.apache.parquet.schema.Type;
-import java.io.IOException;
 import java.util.ArrayList;
 import java.util.HashMap;
 import java.util.List;
@@ -77,20 +75,8 @@ public class HiveSyncTool extends AbstractSyncTool {
     super(configuration.getAllProperties(), fs);
     try {
-      if (cfg.enableKerberos) {
-        System.setProperty("java.security.krb5.conf", cfg.krb5Conf);
-        Configuration conf = new Configuration();
-        conf.set("hadoop.security.authentication", "kerberos");
-        conf.set("kerberos.principal", cfg.principal);
-        UserGroupInformation.setConfiguration(conf);
-        UserGroupInformation.loginUserFromKeytab(cfg.keytabName, cfg.keytabFile);
-        configuration.set(HiveConf.ConfVars.METASTORE_USE_THRIFT_SASL.varname, "true");
-        configuration.set(HiveConf.ConfVars.METASTORE_KERBEROS_PRINCIPAL.varname, cfg.principal);
-        configuration.set(HiveConf.ConfVars.METASTORE_KERBEROS_KEYTAB_FILE.varname, cfg.keytabFile);
-      }
-
       this.hoodieHiveClient = new HoodieHiveClient(cfg, configuration, fs);
-    } catch (RuntimeException | IOException e) {
+    } catch (RuntimeException e) {
       if (cfg.ignoreExceptions) {
         LOG.error("Got runtime exception when hive syncing, but continuing as ignoreExceptions config is set ", e);
       } else {
diff --git a/hudi-utilities/pom.xml b/hudi-utilities/pom.xml
index ad32458d2..474e0499d 100644
--- a/hudi-utilities/pom.xml
+++ b/hudi-utilities/pom.xml
@@ -352,18 +352,8 @@
           <groupId>org.eclipse.jetty.orbit</groupId>
           <artifactId>javax.servlet</artifactId>
         </exclusion>
-        <exclusion>
-          <groupId>org.apache.hadoop</groupId>
-          <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
-        </exclusion>
       </exclusions>
     </dependency>
-
-    <dependency>
-      <groupId>org.apache.hadoop</groupId>
-      <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
-      <version>${hadoop.version}</version>
-    </dependency>
     <dependency>
       <groupId>${hive.groupid}</groupId>
       <artifactId>hive-service</artifactId>
diff --git a/packaging/hudi-flink-bundle/pom.xml b/packaging/hudi-flink-bundle/pom.xml
index c4ee23017..640c71d68 100644
--- a/packaging/hudi-flink-bundle/pom.xml
+++ b/packaging/hudi-flink-bundle/pom.xml
@@ -138,7 +138,6 @@
                   <include>org.apache.hive:hive-service-rpc</include>
                   <include>org.apache.hive:hive-exec</include>
                   <include>org.apache.hive:hive-metastore</include>
-                  <include>org.apache.hive:hive-standalone-metastore</include>
                   <include>org.apache.hive:hive-jdbc</include>
                   <include>org.datanucleus:datanucleus-core</include>
                   <include>org.datanucleus:datanucleus-api-jdo</include>
@@ -445,7 +444,6 @@
       <groupId>${hive.groupid}</groupId>
       <artifactId>hive-exec</artifactId>
       <version>${hive.version}</version>
-      <scope>${flink.bundle.hive.scope}</scope>
     </dependency>
     <dependency>
       <groupId>${hive.groupid}</groupId>
@@ -489,17 +487,8 @@
           <groupId>org.eclipse.jetty</groupId>
           <artifactId>*</artifactId>
         </exclusion>
-        <exclusion>
-          <groupId>org.apache.hadoop</groupId>
-          <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
-        </exclusion>
       </exclusions>
     </dependency>
-    <dependency>
-      <groupId>org.apache.hadoop</groupId>
-      <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
-      <version>${hadoop.version}</version>
-    </dependency>
     <dependency>
       <groupId>${hive.groupid}</groupId>
       <artifactId>hive-common</artifactId>
@@ -694,12 +683,6 @@
           <version>${hive.version}</version>
           <scope>${flink.bundle.hive.scope}</scope>
         </dependency>
-        <dependency>
-          <groupId>org.apache.hive</groupId>
-          <artifactId>hive-standalone-metastore</artifactId>
-          <version>${hive.version}</version>
-          <scope>${flink.bundle.hive.scope}</scope>
-        </dependency>
       </dependencies>
     </profile>
   </profiles>
diff --git a/packaging/hudi-integ-test-bundle/pom.xml b/packaging/hudi-integ-test-bundle/pom.xml
index ee2605de3..30704c8c9 100644
--- a/packaging/hudi-integ-test-bundle/pom.xml
+++ b/packaging/hudi-integ-test-bundle/pom.xml
@@ -408,10 +408,6 @@
           <groupId>org.pentaho</groupId>
           <artifactId>*</artifactId>
         </exclusion>
-        <exclusion>
-          <groupId>org.apache.hadoop</groupId>
-          <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
-        </exclusion>
       </exclusions>
     </dependency>
@@ -428,19 +424,9 @@
           <groupId>javax.servlet</groupId>
           <artifactId>servlet-api</artifactId>
         </exclusion>
-        <exclusion>
-          <groupId>org.apache.hadoop</groupId>
-          <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
-        </exclusion>
       </exclusions>
     </dependency>
-    <dependency>
-      <groupId>org.apache.hadoop</groupId>
-      <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
-      <version>${hadoop.version}</version>
-    </dependency>
-
     <dependency>
       <groupId>${hive.groupid}</groupId>
       <artifactId>hive-common</artifactId>
diff --git a/packaging/hudi-kafka-connect-bundle/pom.xml b/packaging/hudi-kafka-connect-bundle/pom.xml
index d2cc84df7..bf395a411 100644
--- a/packaging/hudi-kafka-connect-bundle/pom.xml
+++ b/packaging/hudi-kafka-connect-bundle/pom.xml
@@ -306,17 +306,6 @@
             <artifactId>hive-jdbc</artifactId>
             <version>${hive.version}</version>
             <scope>${utilities.bundle.hive.scope}</scope>
-            <exclusions>
-                <exclusion>
-                    <groupId>org.apache.hadoop</groupId>
-                    <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
-                </exclusion>
-            </exclusions>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.hadoop</groupId>
-            <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
-            <version>${hadoop.version}</version>
         </dependency>
         <dependency>
diff --git a/packaging/hudi-spark-bundle/pom.xml b/packaging/hudi-spark-bundle/pom.xml
index 44f424540..d8d1a1d2d 100644
--- a/packaging/hudi-spark-bundle/pom.xml
+++ b/packaging/hudi-spark-bundle/pom.xml
@@ -289,17 +289,6 @@
       <artifactId>hive-jdbc</artifactId>
       <version>${hive.version}</version>
       <scope>${spark.bundle.hive.scope}</scope>
-      <exclusions>
-        <exclusion>
-          <groupId>org.apache.hadoop</groupId>
-          <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.hadoop</groupId>
-      <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
-      <version>${hadoop.version}</version>
     </dependency>
     <dependency>
diff --git a/packaging/hudi-utilities-bundle/pom.xml b/packaging/hudi-utilities-bundle/pom.xml
index 9384c4f01..360e8c7f1 100644
--- a/packaging/hudi-utilities-bundle/pom.xml
+++ b/packaging/hudi-utilities-bundle/pom.xml
@@ -308,18 +308,6 @@
       <artifactId>hive-jdbc</artifactId>
       <version>${hive.version}</version>
       <scope>${utilities.bundle.hive.scope}</scope>
-      <exclusions>
-        <exclusion>
-          <groupId>org.apache.hadoop</groupId>
-          <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-
-    <dependency>
-      <groupId>org.apache.hadoop</groupId>
-      <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
-      <version>${hadoop.version}</version>
     </dependency>
     <dependency>
diff --git a/pom.xml b/pom.xml
index 36aed4785..470f7db2d 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1164,10 +1164,6 @@
       <id>confluent</id>
       <url>https://packages.confluent.io/maven/</url>
     </repository>
-    <repository>
-      <id>hdp</id>
-      <url>https://repo.hortonworks.com/content/repositories/releases/</url>
-    </repository>
   </repositories>
   <profiles>

Reading and understanding the code

For each Java file, class, and method modified above, read the code in detail to understand its overall structure and logic, and understand the purpose of each change.
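
The heart of the change is the block added to the HiveSyncTool constructor: it points the JVM at a krb5.conf file, switches Hadoop security to Kerberos, logs in from a keytab through UserGroupInformation, and turns on Thrift SASL for the metastore connection. The standalone sketch below walks through the same login sequence against a plain Hive metastore client; the metastore URI, principal, and keytab path are placeholders, and it assumes the Hadoop and Hive client jars are on the classpath.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosMetastoreLoginSketch {
  public static void main(String[] args) throws Exception {
    // Same steps as the block added to HiveSyncTool, with placeholder values.
    System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");

    // Switch Hadoop security to Kerberos and log in from the keytab.
    Configuration hadoopConf = new Configuration();
    hadoopConf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(hadoopConf);
    // First argument is the login principal (cfg.keytabName in the patch), second is the keytab path.
    UserGroupInformation.loginUserFromKeytab("hive/_HOST@EXAMPLE.COM", "/etc/security/keytabs/hive.service.keytab");

    // Enable SASL on the Thrift metastore connection, as the patch does on the Hive configuration.
    HiveConf hiveConf = new HiveConf();
    hiveConf.set(HiveConf.ConfVars.METASTOREURIS.varname, "thrift://metastore-host:9083");
    hiveConf.set(HiveConf.ConfVars.METASTORE_USE_THRIFT_SASL.varname, "true");
    hiveConf.set(HiveConf.ConfVars.METASTORE_KERBEROS_PRINCIPAL.varname, "hive/_HOST@EXAMPLE.COM");

    HiveMetaStoreClient client = new HiveMetaStoreClient(hiveConf);
    System.out.println("Databases visible after Kerberos login: " + client.getAllDatabases());
    client.close();
  }
}

If this login succeeds, the HoodieHiveClient that HiveSyncTool constructs right after the added block can talk to the kerberized metastore without further changes.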

Applying the changes to the Hudi version we are using

We extended the Hudi source code we are currently using with Kerberos support; the change spans 12 files, roughly 20 code locations, and about 200 lines of code in total.

The resulting changes are as follows:

diff --git a/README.md b/README.md
index 2b32591..f31070b 100644
--- a/README.md
+++ b/README.md
@@ -17,6 +17,17 @@ # Development Log
 
+## November/28th/2022
+
+1. Following the description in the blog post [将hudi同步到配置kerberos的hive3](https://cloud.tencent.com/developer/article/1949358), added Kerberos authentication support and verified its feasibility
+2. Based on the not-yet-merged upstream Hudi PR [Hudi-2402](https://github.com/apache/hudi/pull/3771) and [xiaozhch5's branch](https://github.com/xiaozhch5/hudi/tree/0.10.1-release-hive3-kerberos-enabled), compared, sorted out, and adapted the code against the local branch
+3. Added Kerberos support and the corresponding configuration options to the Hudi-to-Hive table synchronization path
+4. Added the dependencies and configuration entries required by the modified code to the pom files
+
+Ps: The build command used this time is `mvn clean install -DskipTests -Dcheckstyle.skip=true -Dmaven.test.skip=true -DskipITs -Dhadoop.version=3.0.0-cdh6.3.2 -Dhive.version=3.1.2 -Dscala.version=2.12.10 -Dscala.binary.version=2.12 -Dflink.version=1.13.2 -Pflink-bundle-shade-hive3`
+
+// Ps: The build command used this time is `mvn clean install -DskipTests -Dmaven.test.skip=true -DskipITs -Dcheckstyle.skip=true -Drat.skip=true -Dhadoop.version=3.0.0-cdh6.3.2 -Pflink-bundle-shade-hive2 -Dscala-2.12 -Pspark-shade-unbundle-avro`
+
 ## August/2nd/2022
 
 1. Modified the main Flink data-sink flow in Hudi and added a number of class fields, class methods, and helper functions to support the new features;
diff --git a/hudi-aws/pom.xml b/hudi-aws/pom.xml
index 34114fc..636b29c 100644
--- a/hudi-aws/pom.xml
+++ b/hudi-aws/pom.xml
@@ -116,6 +116,13 @@
             <artifactId>mockito-junit-jupiter</artifactId>
             <scope>test</scope>
         </dependency>
+
+        <dependency>
+            <groupId>com.google.code.findbugs</groupId>
+            <artifactId>jsr305</artifactId>
+            <version>3.0.0</version>
+        </dependency>
+
     </dependencies>
     <build>
diff --git a/hudi-flink/src/main/java/org/apache/hudi/configuration/FlinkOptions.java b/hudi-flink/src/main/java/org/apache/hudi/configuration/FlinkOptions.java
index e704a34..413e9ed 100644
--- a/hudi-flink/src/main/java/org/apache/hudi/configuration/FlinkOptions.java
+++ b/hudi-flink/src/main/java/org/apache/hudi/configuration/FlinkOptions.java
@@ -653,6 +653,40 @@ public class FlinkOptions extends HoodieConfig {
       .withDescription("INT64 with original type TIMESTAMP_MICROS is converted to hive timestamp type.\n"
           + "Disabled by default for backward compatibility.");
+  // ------------------------------------------------------------------------
+  //  Kerberos Related Options
+  // ------------------------------------------------------------------------
+
+  public static final ConfigOption<Boolean> HIVE_SYNC_KERBEROS_ENABLE = ConfigOptions
+          .key("hive_sync.kerberos.enable")
+          .booleanType()
+          .defaultValue(false)
+          .withDescription("Whether hive is configured with kerberos");
+
+  public static final ConfigOption<String> HIVE_SYNC_KERBEROS_KRB5CONF = ConfigOptions
+          .key("hive_sync.kerberos.krb5.conf")
+          .stringType()
+          .defaultValue("")
+          .withDescription("kerberos krb5.conf file path");
+
+  public static final ConfigOption<String> HIVE_SYNC_KERBEROS_PRINCIPAL = ConfigOptions
+          .key("hive_sync.kerberos.principal")
+          .stringType()
+          .defaultValue("")
+          .withDescription("hive metastore kerberos principal");
+
+  public static final ConfigOption<String> HIVE_SYNC_KERBEROS_KEYTAB_FILE = ConfigOptions
+          .key("hive_sync.kerberos.keytab.file")
+          .stringType()
+          .defaultValue("")
+          .withDescription("Hive metastore keytab file path");
+
+  public static final ConfigOption<String> HIVE_SYNC_KERBEROS_KEYTAB_NAME = ConfigOptions
+          .key("hive_sync.kerberos.keytab.name")
+          .stringType()
+          .defaultValue("")
+          .withDescription("Hive metastore keytab file name");
+
   // ------------------------------------------------------------------------
   //  Custom Flush related logic
   // ------------------------------------------------------------------------
diff --git a/hudi-flink/src/main/java/org/apache/hudi/sink/utils/HiveSyncContext.java b/hudi-flink/src/main/java/org/apache/hudi/sink/utils/HiveSyncContext.java
index 1c051c8..a1e1da3 100644
--- a/hudi-flink/src/main/java/org/apache/hudi/sink/utils/HiveSyncContext.java
+++ b/hudi-flink/src/main/java/org/apache/hudi/sink/utils/HiveSyncContext.java
@@ -86,6 +86,13 @@ public class HiveSyncContext {
     hiveSyncConfig.skipROSuffix = conf.getBoolean(FlinkOptions.HIVE_SYNC_SKIP_RO_SUFFIX);
     hiveSyncConfig.assumeDatePartitioning = conf.getBoolean(FlinkOptions.HIVE_SYNC_ASSUME_DATE_PARTITION);
     hiveSyncConfig.withOperationField = conf.getBoolean(FlinkOptions.CHANGELOG_ENABLED);
+    // Kerberos Related Configurations
+    hiveSyncConfig.enableKerberos = conf.getBoolean(FlinkOptions.HIVE_SYNC_KERBEROS_ENABLE);
+    hiveSyncConfig.krb5Conf = conf.getString(FlinkOptions.HIVE_SYNC_KERBEROS_KRB5CONF);
+    hiveSyncConfig.principal = conf.getString(FlinkOptions.HIVE_SYNC_KERBEROS_PRINCIPAL);
+    hiveSyncConfig.keytabFile = conf.getString(FlinkOptions.HIVE_SYNC_KERBEROS_KEYTAB_FILE);
+    hiveSyncConfig.keytabName = conf.getString(FlinkOptions.HIVE_SYNC_KERBEROS_KEYTAB_NAME);
+    // Kerberos Configs END
     return hiveSyncConfig;
   }
 }
diff --git a/hudi-hadoop-mr/pom.xml b/hudi-hadoop-mr/pom.xml
index df2a23b..e61dbd4 100644
--- a/hudi-hadoop-mr/pom.xml
+++ b/hudi-hadoop-mr/pom.xml
@@ -67,6 +67,17 @@
     <dependency>
       <groupId>${hive.groupid}</groupId>
      <artifactId>hive-jdbc</artifactId>
+      <exclusions>
+        <exclusion>
+          <groupId>org.apache.hadoop</groupId>
+          <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
+      <version>${hadoop.version}</version>
     </dependency>
     <dependency>
       <groupId>${hive.groupid}</groupId>
diff --git a/hudi-sync/hudi-hive-sync/src/main/java/org/apache/hudi/hive/HiveSyncConfig.java b/hudi-sync/hudi-hive-sync/src/main/java/org/apache/hudi/hive/HiveSyncConfig.java
index 9b63851..624300f 100644
--- a/hudi-sync/hudi-hive-sync/src/main/java/org/apache/hudi/hive/HiveSyncConfig.java
+++ b/hudi-sync/hudi-hive-sync/src/main/java/org/apache/hudi/hive/HiveSyncConfig.java
@@ -123,6 +123,22 @@ public class HiveSyncConfig implements Serializable {
   @Parameter(names = {"--conditional-sync"}, description = "If true, only sync on conditions like schema change or partition change.")
   public Boolean isConditionalSync = false;
+  // Kerberos Related Configuration
+  @Parameter(names = {"--enable-kerberos"}, description = "Whether hive configs kerberos")
+  public Boolean enableKerberos = false;
+
+  @Parameter(names = {"--krb5-conf"}, description = "krb5.conf file path")
+  public String krb5Conf = "/etc/krb5.conf";
+
+  @Parameter(names = {"--principal"}, description = "hive metastore principal")
+  public String principal = "hive/_HOST@EXAMPLE.COM";
+
+  @Parameter(names = {"--keytab-file"}, description = "hive metastore keytab file path")
+  public String keytabFile;
+
+  @Parameter(names = {"--keytab-name"}, description = "hive metastore keytab name")
+  public String keytabName;
+
   // enhance the similar function in child class
   public static HiveSyncConfig copy(HiveSyncConfig cfg) {
     HiveSyncConfig newConfig = new HiveSyncConfig();
@@ -147,6 +163,13 @@ public class HiveSyncConfig implements Serializable {
     newConfig.sparkSchemaLengthThreshold = cfg.sparkSchemaLengthThreshold;
     newConfig.withOperationField = cfg.withOperationField;
     newConfig.isConditionalSync = cfg.isConditionalSync;
+    // Kerberos Related Configs
+    newConfig.enableKerberos = cfg.enableKerberos;
+    newConfig.krb5Conf = cfg.krb5Conf;
+    newConfig.principal = cfg.principal;
+    newConfig.keytabFile = cfg.keytabFile;
+    newConfig.keytabName = cfg.keytabName;
+    // Kerberos Related Configs END
     return newConfig;
   }
diff --git a/hudi-sync/hudi-hive-sync/src/main/java/org/apache/hudi/hive/HiveSyncTool.java b/hudi-sync/hudi-hive-sync/src/main/java/org/apache/hudi/hive/HiveSyncTool.java
index 3bbaee1..2fa0e86 100644
--- a/hudi-sync/hudi-hive-sync/src/main/java/org/apache/hudi/hive/HiveSyncTool.java
+++ b/hudi-sync/hudi-hive-sync/src/main/java/org/apache/hudi/hive/HiveSyncTool.java
@@ -38,6 +38,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.hive.conf.HiveConf;
 import org.apache.hadoop.hive.metastore.api.Partition;
+import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.log4j.LogManager;
 import org.apache.log4j.Logger;
 import org.apache.parquet.schema.GroupType;
@@ -45,6 +46,7 @@ import org.apache.parquet.schema.MessageType;
 import org.apache.parquet.schema.PrimitiveType;
 import org.apache.parquet.schema.Type;
+import java.io.IOException;
 import java.util.ArrayList;
 import java.util.HashMap;
 import java.util.List;
@@ -77,8 +79,23 @@ public class HiveSyncTool extends AbstractSyncTool {
     super(configuration.getAllProperties(), fs);
     try {
+
+      // Start Kerberos Processing Logic
+      if (cfg.enableKerberos) {
+        System.setProperty("java.security.krb5.conf", cfg.krb5Conf);
+        Configuration conf = new Configuration();
+        conf.set("hadoop.security.authentication", "kerberos");
+        conf.set("kerberos.principal", cfg.principal);
+        UserGroupInformation.setConfiguration(conf);
+        UserGroupInformation.loginUserFromKeytab(cfg.keytabName, cfg.keytabFile);
+        configuration.set(HiveConf.ConfVars.METASTORE_USE_THRIFT_SASL.varname, "true");
+        configuration.set(HiveConf.ConfVars.METASTORE_KERBEROS_PRINCIPAL.varname, cfg.principal);
+        configuration.set(HiveConf.ConfVars.METASTORE_KERBEROS_KEYTAB_FILE.varname, cfg.keytabFile);
+      }
+
       this.hoodieHiveClient = new HoodieHiveClient(cfg, configuration, fs);
-    } catch (RuntimeException e) {
+    } catch (RuntimeException | IOException e) {
+      // Support IOException e
       if (cfg.ignoreExceptions) {
         LOG.error("Got runtime exception when hive syncing, but continuing as ignoreExceptions config is set ", e);
       } else {
diff --git a/hudi-utilities/pom.xml b/hudi-utilities/pom.xml
index 470ad47..5b95ffb 100644
--- a/hudi-utilities/pom.xml
+++ b/hudi-utilities/pom.xml
@@ -352,8 +352,19 @@
           <groupId>org.eclipse.jetty.orbit</groupId>
           <artifactId>javax.servlet</artifactId>
         </exclusion>
+        <exclusion>
+          <groupId>org.apache.hadoop</groupId>
+          <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
+        </exclusion>
       </exclusions>
     </dependency>
+
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
+      <version>${hadoop.version}</version>
+    </dependency>
+
     <dependency>
       <groupId>${hive.groupid}</groupId>
       <artifactId>hive-service</artifactId>
diff --git a/packaging/hudi-flink-bundle/pom.xml b/packaging/hudi-flink-bundle/pom.xml
index fc8d183..27b52d3 100644
--- a/packaging/hudi-flink-bundle/pom.xml
+++ b/packaging/hudi-flink-bundle/pom.xml
@@ -139,6 +139,7 @@
                   <include>org.apache.hive:hive-service-rpc</include>
                   <include>org.apache.hive:hive-exec</include>
                   <include>org.apache.hive:hive-metastore</include>
+                  <include>org.apache.hive:hive-standalone-metastore</include>
                   <include>org.apache.hive:hive-jdbc</include>
                   <include>org.datanucleus:datanucleus-core</include>
                   <include>org.datanucleus:datanucleus-api-jdo</include>
@@ -442,6 +443,7 @@
       <groupId>${hive.groupid}</groupId>
       <artifactId>hive-exec</artifactId>
       <version>${hive.version}</version>
+      <scope>${flink.bundle.hive.scope}</scope>
       <exclusions>
         <exclusion>
           <groupId>javax.mail</groupId>
@@ -503,8 +505,17 @@
           <groupId>org.eclipse.jetty</groupId>
           <artifactId>*</artifactId>
         </exclusion>
+        <exclusion>
+          <groupId>org.apache.hadoop</groupId>
+          <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
+        </exclusion>
       </exclusions>
     </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
+      <version>${hadoop.version}</version>
+    </dependency>
     <dependency>
       <groupId>${hive.groupid}</groupId>
       <artifactId>hive-common</artifactId>
@@ -706,6 +717,12 @@
           <version>${hive.version}</version>
           <scope>${flink.bundle.hive.scope}</scope>
         </dependency>
+        <dependency>
+          <groupId>org.apache.hive</groupId>
+          <artifactId>hive-standalone-metastore</artifactId>
+          <version>${hive.version}</version>
+          <scope>${flink.bundle.hive.scope}</scope>
+        </dependency>
       </dependencies>
     </profile>
   </profiles>
diff --git a/packaging/hudi-kafka-connect-bundle/pom.xml b/packaging/hudi-kafka-connect-bundle/pom.xml
index d5f90db..8d1e1a4 100644
--- a/packaging/hudi-kafka-connect-bundle/pom.xml
+++ b/packaging/hudi-kafka-connect-bundle/pom.xml
@@ -306,6 +306,18 @@
             <artifactId>hive-jdbc</artifactId>
             <version>${hive.version}</version>
             <scope>${utilities.bundle.hive.scope}</scope>
+            <exclusions>
+                <exclusion>
+                    <groupId>org.apache.hadoop</groupId>
+                    <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
+                </exclusion>
+            </exclusions>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
+            <version>${hadoop.version}</version>
         </dependency>
         <dependency>
diff --git a/packaging/hudi-spark-bundle/pom.xml b/packaging/hudi-spark-bundle/pom.xml
index 3544e31..8dd216f 100644
--- a/packaging/hudi-spark-bundle/pom.xml
+++ b/packaging/hudi-spark-bundle/pom.xml
@@ -293,6 +293,18 @@
       <artifactId>hive-jdbc</artifactId>
       <version>${hive.version}</version>
       <scope>${spark.bundle.hive.scope}</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>org.apache.hadoop</groupId>
+          <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
+      <version>${hadoop.version}</version>
     </dependency>
     <dependency>
diff --git a/packaging/hudi-utilities-bundle/pom.xml b/packaging/hudi-utilities-bundle/pom.xml
index a3da0a8..d5e944a 100644
--- a/packaging/hudi-utilities-bundle/pom.xml
+++ b/packaging/hudi-utilities-bundle/pom.xml
@@ -312,6 +312,18 @@
       <artifactId>hive-jdbc</artifactId>
       <version>${hive.version}</version>
      <scope>${utilities.bundle.hive.scope}</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>org.apache.hadoop</groupId>
+          <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
+      <version>${hadoop.version}</version>
     </dependency>
     <dependency>
diff --git a/pom.xml b/pom.xml
index 58f6130..ff760bb 100644
--- a/pom.xml
+++ b/pom.xml
@@ -44,20 +44,20 @@
     <module>hudi-timeline-service</module>
     <module>hudi-utilities</module>
     <module>hudi-sync</module>
-    <!--<module>packaging/hudi-hadoop-mr-bundle</module>-->
-    <!--<module>packaging/hudi-hive-sync-bundle</module>-->
-    <!--<module>packaging/hudi-spark-bundle</module>-->
-    <!--<module>packaging/hudi-presto-bundle</module>-->
-    <!--<module>packaging/hudi-utilities-bundle</module>-->
-    <!--<module>packaging/hudi-timeline-server-bundle</module>-->
-    <!--<module>docker/hoodie/hadoop</module>-->
-    <!--<module>hudi-integ-test</module>-->
-    <!--<module>packaging/hudi-integ-test-bundle</module>-->
-    <!--<module>hudi-examples</module>-->
+    <module>packaging/hudi-hadoop-mr-bundle</module>
+    <module>packaging/hudi-hive-sync-bundle</module>
+    <module>packaging/hudi-spark-bundle</module>
+    <module>packaging/hudi-presto-bundle</module>
+    <module>packaging/hudi-utilities-bundle</module>
+    <module>packaging/hudi-timeline-server-bundle</module>
+    <module>docker/hoodie/hadoop</module>
+<!--    <module>hudi-integ-test</module>-->
+<!--    <module>packaging/hudi-integ-test-bundle</module>-->
+    <module>hudi-examples</module>
     <module>hudi-flink</module>
     <module>hudi-kafka-connect</module>
     <module>packaging/hudi-flink-bundle</module>
-    <!--<module>packaging/hudi-kafka-connect-bundle</module>-->
+    <module>packaging/hudi-kafka-connect-bundle</module>
   </modules>
   <licenses>
@@ -1084,6 +1084,10 @@
       <id>confluent</id>
       <url>https://packages.confluent.io/maven/</url>
     </repository>
+    <repository>
+      <id>hdp</id>
+      <url>https://repo.hortonworks.com/content/repositories/releases/</url>
+    </repository>
   </repositories>
   <profiles>

Build command

Build with the following command, then place the resulting jars in the appropriate locations on the cluster.

The build command is:

mvn clean install -DskipTests -Dcheckstyle.skip=true -Dmaven.test.skip=true -DskipITs -Dhadoop.version=3.0.0-cdh6.3.2 -Dhive.version=3.1.2 -Dscala.version=2.12.10 -Dscala.binary.version=2.12 -Dflink.version=1.13.2 -Pflink-bundle-shade-hive3

The jars related to Hudi, Flink, and Hive are listed below. There are three in total:

packaging/hudi-hive-sync-bundle/target/hudi-hive-sync-bundle-0.10.0.jar
packaging/hudi-hadoop-mr-bundle/target/hudi-hadoop-mr-bundle-0.10.0.jar
packaging/hudi-flink-bundle/target/hudi-flink-bundle_2.12-0.10.0.jar

Environment deployment

Place them in the following locations in the cluster environment:

  1. Place hudi-hive-sync-bundle-0.10.0.jar and hudi-hadoop-mr-bundle-0.10.0.jar in $HIVE_HOME/auxlib
cd $HIVE_HOME/auxlib
ls ./
hudi-hive-sync-bundle-0.10.0.jar
hudi-hadoop-mr-bundle-0.10.0.jar
  2. Place hudi-flink-bundle_2.12-0.10.0.jar in $FLINK_HOME/lib
cd  $FLINK_HOME/lib
ls ./
...
hudi-flink-bundle_2.12-0.10.0.jar
...
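
Once the jars are deployed, the new hive_sync.kerberos.* options can be supplied together with the usual Hudi Flink connector options. The sketch below shows one possible Flink 1.13 Table API job that writes a Hudi table and syncs it to a kerberized Hive metastore; the schema, HDFS path, metastore URI, principal, and keytab locations are placeholders, and hive_sync.mode = 'hms' is assumed since the sync goes through the metastore Thrift interface.

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class HudiKerberosHiveSyncJob {
  public static void main(String[] args) {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

    // Hudi sink with Hive sync enabled; the hive_sync.kerberos.* keys are the options added above.
    // Per the patched HiveSyncTool, hive_sync.kerberos.keytab.name is the principal used for the keytab login.
    tableEnv.executeSql(
        "CREATE TABLE hudi_demo ("
            + "  id INT PRIMARY KEY NOT ENFORCED,"
            + "  name STRING,"
            + "  ts TIMESTAMP(3)"
            + ") WITH ("
            + "  'connector' = 'hudi',"
            + "  'path' = 'hdfs:///warehouse/hudi_demo',"
            + "  'table.type' = 'MERGE_ON_READ',"
            + "  'hive_sync.enable' = 'true',"
            + "  'hive_sync.mode' = 'hms',"
            + "  'hive_sync.metastore.uris' = 'thrift://metastore-host:9083',"
            + "  'hive_sync.db' = 'default',"
            + "  'hive_sync.table' = 'hudi_demo',"
            + "  'hive_sync.kerberos.enable' = 'true',"
            + "  'hive_sync.kerberos.krb5.conf' = '/etc/krb5.conf',"
            + "  'hive_sync.kerberos.principal' = 'hive/_HOST@EXAMPLE.COM',"
            + "  'hive_sync.kerberos.keytab.name' = 'hive/_HOST@EXAMPLE.COM',"
            + "  'hive_sync.kerberos.keytab.file' = '/etc/security/keytabs/hive.service.keytab'"
            + ")");

    // Writing a row produces a Hudi commit, after which the Hive sync (now Kerberos-authenticated) is triggered.
    tableEnv.executeSql("INSERT INTO hudi_demo VALUES (1, 'kerberos-test', TIMESTAMP '2022-11-28 00:00:00')");
  }
}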
