Installing Kafka v1.1.1 on a Kunpeng Server Running Kylin

A project required installing Kafka v1.1.1 on a Kunpeng server running Kylin Linux, so this post records the installation and configuration process.

Environment

# Show detailed system information
[root@test kafka_2.12-1.1.1]# uname -a
Linux test.novalocal 4.19.148+ #1 SMP Mon Oct 5 22:04:46 EDT 2020 aarch64 aarch64 aarch64 GNU/Linux
# Show OS release information
[root@test kafka_2.12-1.1.1]# cat /etc/kylin-release 
Kylin Linux Advanced Server release V10 (Tercel)
# Count logical CPUs
[root@test kafka_2.12-1.1.1]# cat /proc/cpuinfo| grep "processor"| wc -l
32
# Show CPU information
[root@test kafka_2.12-1.1.1]# lscpu
Architecture:                    aarch64
CPU op-mode(s):                  64-bit
Byte Order:                      Little Endian
CPU(s):                          32
On-line CPU(s) list:             0-31
Thread(s) per core:              1
Core(s) per socket:              16
Socket(s):                       2
NUMA node(s):                    2
Vendor ID:                       HiSilicon
Model:                           0
Model name:                      Kunpeng-920
Stepping:                        0x1
CPU max MHz:                     2400.0000
CPU min MHz:                     2400.0000
BogoMIPS:                        200.00
L1d cache:                       2 MiB
L1i cache:                       2 MiB
L2 cache:                        16 MiB
L3 cache:                        64 MiB
NUMA node0 CPU(s):               0-15
NUMA node1 CPU(s):               16-31
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Not affected
Vulnerability Spectre v1:        Mitigation; __user pointer sanitization
Vulnerability Spectre v2:        Not affected
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm
# Show the Java version
[root@test kafka_2.12-1.1.1]# java -version
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (build 1.8.0_242-b08)
OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)
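
Kafka 1.1.1 needs a Java 8 runtime, which this host already has. If the JDK were missing, something like the following should work on Kylin V10 — a sketch only, assuming its yum repositories carry the usual RHEL-style OpenJDK package names:

# Package names assumed from RHEL conventions; adjust to what the Kylin repos actually provide
yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
# Verify the runtime afterwards
java -version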

Download

Download the appropriate release from the Apache site. The download page is https://kafka.apache.org/downloads; for 1.1.1, grab kafka_2.12-1.1.1.tgz from the archive: https://archive.apache.org/dist/kafka/1.1.1/kafka_2.12-1.1.1.tgz
The version string has two parts: the first (2.12) is the Scala version the broker was built against, and the second (1.1.1) is the Kafka version itself. The latest Kafka is already in the 3.x series; 1.1.1 is used here only because this project requires it, and a recent release is recommended otherwise.
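
For reference, the tarball can be fetched straight from the command line. A minimal sketch; the URL is the archive link above, and the integrity check assumes you compare the digest by eye against the .sha512 file Apache publishes next to the release:

cd /data/public/kafka
wget https://archive.apache.org/dist/kafka/1.1.1/kafka_2.12-1.1.1.tgz
# Compute the local digest; compare it manually against the published .sha512
sha512sum kafka_2.12-1.1.1.tgz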

Scala 2.11 and Scala 2.12 are two major releases of the Scala language, and the difference matters here because the Kafka broker itself is written in Scala. The key points:
1. Java requirement: Scala 2.12 requires Java 8; Scala 2.11 also runs on Java 6 and 7.
2. Bytecode and performance: 2.12 takes advantage of Java 8, compiling function literals to invokedynamic-based lambdas and traits to interfaces with default methods, which yields smaller jars and generally better runtime performance.
3. Java interop: in 2.12 a Scala function literal can be passed wherever a Java single-abstract-method (SAM) interface is expected, simplifying use of Java 8 APIs.
4. Binary compatibility: Scala guarantees binary compatibility only within a single minor series, so 2.11 and 2.12 artifacts are mutually incompatible; this is exactly why Kafka publishes a separate build per Scala version.
5. Community support: Scala 2.11 is end-of-life, and library updates now target 2.12 and newer.
In short: for new work, pick Scala 2.12 or newer. For a Kafka deployment, the Scala version only affects the broker binaries themselves; ordinary Java producer and consumer clients are unaffected by it.

Deployment

1. Extract the archive:
tar -zxvf kafka_2.12-1.1.1.tgz
2. After extraction the path is:
[root@test kafka_2.12-1.1.1]# pwd
/data/public/kafka/kafka_2.12-1.1.1
3. Edit config/server.properties; only listeners and advertised.listeners need to be changed:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://192.168.31.100:9092
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://192.168.31.100:9092
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=409600000

############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################
# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
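
After editing, a quick grep confirms that both listener keys point at the server's LAN address:

# Expect both lines to show PLAINTEXT://192.168.31.100:9092
grep -E '^(listeners|advertised\.listeners)=' /data/public/kafka/kafka_2.12-1.1.1/config/server.properties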

Startup

1. Start Zookeeper first:
nohup /data/public/kafka/kafka_2.12-1.1.1/bin/zookeeper-server-start.sh /data/public/kafka/kafka_2.12-1.1.1/config/zookeeper.properties > /dev/null 2>&1 &
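
Because nohup discards all output here, it is worth probing Zookeeper directly before starting the broker. The ZooKeeper 3.4.x bundled with Kafka 1.1.1 answers the classic four-letter ruok command by default; a quick check, assuming nc (netcat) is installed:

# A healthy Zookeeper replies "imok"
echo ruok | nc localhost 2181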

2. Then start Kafka:
nohup /data/public/kafka/kafka_2.12-1.1.1/bin/kafka-server-start.sh /data/public/kafka/kafka_2.12-1.1.1/config/server.properties > /dev/null 2>&1 &
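
The broker keeps its own log under logs/ in the installation directory, so startup can be confirmed there despite the silenced nohup output:

# A successful start ends with a "started (kafka.server.KafkaServer)" line
grep "started (kafka.server.KafkaServer)" /data/public/kafka/kafka_2.12-1.1.1/logs/server.log | tail -n 1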

3. Check that both are listening:

[root@test kafka_2.12-1.1.1]# netstat -nltp | grep -E '(2181|9092)'
tcp6       0      0 192.168.31.100:9092     :::*                    LISTEN      970022/java         
tcp6       0      0 :::2181                 :::*                    LISTEN      966577/java

This confirms Kafka is up: 2181 is the Zookeeper port and 9092 is the Kafka port.
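
A quick end-to-end smoke test with the bundled console tools verifies that messages actually flow. Note that in Kafka 1.1.1 topic administration still goes through Zookeeper, while the console producer and consumer talk to the broker itself; the topic name below is just a throwaway example:

cd /data/public/kafka/kafka_2.12-1.1.1
# Create a test topic (1.1.1 still uses --zookeeper for topic admin)
bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic smoke-test --partitions 1 --replication-factor 1
# Produce: type a few lines, then Ctrl+C
bin/kafka-console-producer.sh --broker-list 192.168.31.100:9092 --topic smoke-test
# Consume them back from the beginning, then Ctrl+C
bin/kafka-console-consumer.sh --bootstrap-server 192.168.31.100:9092 --topic smoke-test --from-beginning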

Allowing Client Access

firewall-cmd --zone=public --add-port=2181/tcp --permanent
firewall-cmd --zone=public --add-port=9092/tcp --permanent
firewall-cmd --reload
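
The result can be verified after the reload:

# Should list 2181/tcp and 9092/tcp
firewall-cmd --zone=public --list-ports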

Accessing with Offset Explorer 2.0

Once the connection is configured in Offset Explorer, connect to the server; Kafka can then be used to produce and consume messages.

