Basic Concepts
How ZooKeeper Works
- From a design-pattern perspective, ZooKeeper is a distributed service-management framework built on the observer pattern: it stores and manages data that all participants care about, and whenever that data changes state, ZooKeeper notifies the observers registered with it so they can react accordingly.
ZooKeeper Characteristics
- A ZooKeeper ensemble consists of one leader (Leader) and multiple followers (Follower)
- The cluster keeps serving as long as more than half of its nodes are alive, which is why ZooKeeper is best deployed on an odd number of servers
- Globally consistent data: every server keeps an identical copy of the data, so a client sees the same data no matter which server it connects to
- Ordered updates: update requests from the same client are executed in the order they were sent
- Atomic updates: a data update either succeeds completely or fails completely
- Timeliness: clients can read the latest data within a bounded time window
Data Structure
- ZooKeeper's data model resembles a Unix file system
- Each node in the tree is called a ZNode
- Each ZNode can store up to 1 MB of data
- Each ZNode is uniquely identified by its path
Election Mechanism
First Leader Election
Suppose five servers are started one after another. The following happens:
- Server 1 starts and initiates an election, voting for itself. With only 1 vote, the election cannot complete, so server 1 stays in the LOOKING state
- Server 2 starts and another election round begins. Servers 1 and 2 each vote for themselves and exchange ballots; because server 1's myid is smaller than server 2's, server 1 switches its vote to server 2. Server 1 now has 0 votes and server 2 has 2 votes
- Server 3 starts and initiates an election. By the same exchange rule, servers 1 and 2 both vote for server 3, which now holds a majority of the votes. Servers 1 and 2 change their state to FOLLOWING, and server 3 changes its state to LEADING
- Server 4 starts and initiates an election; since the majority already follows server 3, server 4 casts its vote for server 3 and sets its own state to FOLLOWING
- Server 5 behaves like server 4: it votes for server 3 and sets its state to FOLLOWING
Each node carries:
- Server ID (SID): uniquely identifies a machine in the ZooKeeper cluster; it must not repeat across machines and matches the machine's myid
- Transaction ID (ZXID): identifies one change of server state. At any given moment the ZXID values across the machines in the cluster are not necessarily identical; this follows from how ZooKeeper processes client update requests
- Epoch: the number of the current leader's term. While there is no leader, every ballot in the same voting round carries the same logical-clock value, and the value is incremented after each round of voting
Subsequent Leader Elections
Suppose the leader in the scenario above crashes.
- A new election is triggered when either of the following happens:
  - A server starts up and initializes
  - A server loses its connection to the leader while running
- When a server enters the leader-election flow, the cluster can be in one of two states:
  - A leader already exists: during the election the recovering server is told about the current leader, reconnects to it, and resumes its previous role
  - No leader exists and at least half of the servers are healthy: the healthy servers elect a leader by comparing ballots ordered by (EPOCH, ZXID, SID), as sketched below
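A minimal Java sketch of that ballot ordering (the Vote type and field names here are illustrative, not ZooKeeper's internal FastLeaderElection classes): a higher epoch wins, then a higher zxid, then a higher sid.

import java.util.Comparator;

public class VoteOrdering {
    // Hypothetical ballot type carrying the three comparison keys.
    record Vote(long epoch, long zxid, long sid) {}

    // A ballot wins on a higher epoch; ties fall through to zxid
    // (the more up-to-date data), then to sid as the final tie-breaker.
    static final Comparator<Vote> BALLOT_ORDER =
            Comparator.comparingLong(Vote::epoch)
                    .thenComparingLong(Vote::zxid)
                    .thenComparingLong(Vote::sid);

    static Vote preferred(Vote a, Vote b) {
        return BALLOT_ORDER.compare(a, b) >= 0 ? a : b;
    }

    public static void main(String[] args) {
        Vote v1 = new Vote(2, 100, 1);
        Vote v2 = new Vote(2, 101, 3);
        System.out.println(preferred(v1, v2)); // v2 wins: same epoch, larger zxid
    }
}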
Read/Write Mechanism
How Writes Work
When the client is connected to the Leader:
- The client writes to the Leader
- The Leader replicates the write until more than half of the nodes have it
- The Leader returns an ack to the client
- The Leader then continues replicating to the remaining nodes
When the client is connected to a Follower:
- The client writes to the Follower
- The Follower forwards the request to the Leader
- The Leader applies the write first, then tells the Followers to write
- Once more than half of the nodes have completed the write, the Leader acks the Follower
- The Follower returns the ack to the client
- The Leader continues replicating to the remaining nodes
How Reads Work
- A client reads by connecting to one node of the cluster (normally the node behind the configured URL) and requesting the data from that node
- Because ZooKeeper guarantees sequential consistency rather than strong consistency, and acks a write once a quorum rather than all nodes has applied it, a client reading from a lagging node may see stale data
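When a reader needs the newest value, the client API offers sync(), which asks the connected server to catch up with the leader before the following read. A minimal sketch, assuming a placeholder connect string and the /hello path from the examples below:

import org.apache.zookeeper.ZooKeeper;

import java.nio.charset.StandardCharsets;
import java.util.concurrent.CountDownLatch;

public class SyncBeforeRead {
    public static void main(String[] args) throws Exception {
        // Placeholder connect string; the watcher ignores events for brevity.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 2000, event -> {});
        CountDownLatch synced = new CountDownLatch(1);

        // sync() is asynchronous: the callback fires once this server has
        // caught up with the leader for the given path.
        zk.sync("/hello", (rc, path, ctx) -> synced.countDown(), null);
        synced.await();

        // This read now reflects at least all writes committed before sync().
        byte[] data = zk.getData("/hello", false, null);
        System.out.println(new String(data, StandardCharsets.UTF_8));
        zk.close();
    }
}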
Node Types
Node Classification
ZNodes fall into the following types:
- Persistent: the node is not deleted when the client disconnects from the server
- Ephemeral: the node is deleted automatically when the client disconnects from the server
- Persistent directory node: the node survives after the client disconnects from ZooKeeper
- Persistent sequential node: the node survives after the client disconnects, and ZooKeeper appends a monotonically increasing sequence number to its name. Sequence numbers impose a global order, so clients can use them to order events
- Ephemeral directory node: the node is deleted when the connection closes
- Ephemeral sequential node: the node is deleted when the connection closes; ZooKeeper appends a sequence number to its name
Node CRUD
# Create nodes
[zk: localhost:2181(CONNECTED) 3] create /hello "data"
Created /hello
[zk: localhost:2181(CONNECTED) 5] create /hello/world "data in world"
Created /hello/world
# List the node's children
[zk: localhost:2181(CONNECTED) 8] ls /hello
[world]
# List children together with the node's metadata
[zk: localhost:2181(CONNECTED) 10] ls -s /hello
[world]
cZxid = 0x500000006
ctime = Sun Aug 20 03:57:56 UTC 2023
mZxid = 0x500000006
mtime = Sun Aug 20 03:57:56 UTC 2023
pZxid = 0x500000008
cversion = 1
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 1
# Show the node's metadata only
[zk: localhost:2181(CONNECTED) 11] stat /hello
cZxid = 0x500000026
ctime = Sun Aug 20 04:45:24 UTC 2023
mZxid = 0x500000026
mtime = Sun Aug 20 04:45:24 UTC 2023
pZxid = 0x500000026
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 0
# Get the node's value
[zk: localhost:2181(CONNECTED) 12] get -s /hello
data
cZxid = 0x500000006
ctime = Sun Aug 20 03:57:56 UTC 2023
mZxid = 0x500000006
mtime = Sun Aug 20 03:57:56 UTC 2023
pZxid = 0x500000008
cversion = 1
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 1
# Create a sequential node
[zk: localhost:2181(CONNECTED) 14] create -s /hello/seqworld "data with sequence"
Created /hello/seqworld0000000001  # ZooKeeper appends the sequence number automatically
[zk: localhost:2181(CONNECTED) 1] create -s /hello/seqworld "data with sequence 2"
Created /hello/seqworld0000000002
[zk: localhost:2181(CONNECTED) 0] create /hello/world "data with sequence 2"
Node already exists: /hello/world
# Sequential nodes can be created repeatedly under the same name; non-sequential nodes cannot

# -------- Ephemeral nodes --------
# Create an ephemeral node
[zk: localhost:2181(CONNECTED) 2] create -e /hello/tmpworld "temp hello data"
Created /hello/tmpworld
[zk: localhost:2181(CONNECTED) 5] ls /hello
[seqworld0000000001, seqworld0000000002, tmpworld, world]
# Create an ephemeral sequential node
[zk: localhost:2181(CONNECTED) 3] create -es /hello/tmp_seq_world "temp hello data"
Created /hello/tmp_seq_world0000000004
# Quit and restart the client
[zk: localhost:2181(CONNECTED) 6] quit
root@4dc78c0fb310:/apache-zookeeper-3.8.2-bin# zkCli.sh
# All ephemeral nodes have been deleted
[zk: localhost:2181(CONNECTED) 0] ls /hello
[seqworld0000000001, seqworld0000000002, world]

# -------- Updating nodes --------
# Update the value of the /hello/world node
[zk: localhost:2181(CONNECTED) 2] set /hello/world "new data"
[zk: localhost:2181(CONNECTED) 4] get -s /hello/world
new data
cZxid = 0x500000008
ctime = Sun Aug 20 03:58:34 UTC 2023
mZxid = 0x500000018
mtime = Sun Aug 20 04:28:15 UTC 2023
pZxid = 0x500000008
cversion = 0
dataVersion = 2
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 8
numChildren = 0

# -------- Deleting nodes --------
# Delete a single node
[zk: localhost:2181(CONNECTED) 3] delete /hello/world
[zk: localhost:2181(CONNECTED) 5] ls /hello
[listen, new, seqworld0000000001, seqworld0000000002]
# Delete a node and all of its children recursively
[zk: localhost:2181(CONNECTED) 9] deleteall /hello
[zk: localhost:2181(CONNECTED) 8] ls /hello
Node does not exist: /hello
Watchers
How Watchers Work
- The client is created in the main thread
- The client then spawns two threads: one for network communication (connect) and one for listening (listener)
- The connect thread sends the registered watch events to ZooKeeper
- ZooKeeper adds the registered watch events to its list of registered watchers
- When ZooKeeper detects a data or path change, it sends the event to the listener thread
- The listener thread internally invokes the process() method (see the sketch below)
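A minimal sketch of that callback path, assuming the standard org.apache.zookeeper client API (the class name is illustrative): process() is the hook the listener thread invokes, and because ZooKeeper watches are one-shot, the watcher re-registers itself after each event.

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ReRegisteringWatcher implements Watcher {
    private final ZooKeeper zooKeeper;
    private final String path;

    public ReRegisteringWatcher(ZooKeeper zooKeeper, String path) {
        this.zooKeeper = zooKeeper;
        this.path = path;
    }

    // Invoked on the listener (event) thread when ZooKeeper delivers an event.
    @Override
    public void process(WatchedEvent event) {
        System.out.printf("type=%s path=%s%n", event.getType(), event.getPath());
        try {
            // Watches fire once; register again to keep observing the node.
            zooKeeper.exists(path, this);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

Registering it once with zooKeeper.exists(path, watcher) starts the cycle; every subsequent change re-arms the watch.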
Watching Nodes
# Create a node
[zk: localhost:2181(CONNECTED) 6] create /hello/listen "data under listening"
Created /hello/listen
# Watch the node's data
[zk: localhost:2181(CONNECTED) 7] get -w /hello/listen
data under listening
[zk: localhost:2181(CONNECTED) 9] set /hello/listen "new data under listening"
WatchedEvent state:SyncConnected type:NodeDataChanged path:/hello/listen
# Watch for changes to the node's children (path changes)
[zk: localhost:2181(CONNECTED) 10] ls -w /hello
[listen, seqworld0000000001, seqworld0000000002, world]
# Create a child node
[zk: localhost:2181(CONNECTED) 12] create /hello/new "data"
WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/hello
Created /hello/new
Application Scenarios
Unified Naming Service
- In a distributed environment, applications and services often need uniform names so that they are easy to identify
Unified Configuration Management
- In a distributed environment, configuration needs to stay synchronized
  - All nodes should share the same configuration information
  - After a configuration file is modified, the change should propagate to every node quickly
- Configuration management can be delegated to ZooKeeper
  - Write the configuration into a ZNode on ZooKeeper
  - Have each client server watch that ZNode; when its data changes, ZooKeeper notifies every watching client
Unified Cluster Management
- In a distributed environment it is necessary to know each node's state in real time
  - The cluster can then be adjusted according to the nodes' live state
- ZooKeeper can monitor node state changes in real time
  - Write each node's information into a ZNode on ZooKeeper
  - Watch that ZNode to receive its state changes in real time
Dynamic Server Online/Offline
- Clients can perceive servers going online and offline in real time
Soft Load Balancing
- Record each server's access count in ZooKeeper and let the server with the fewest accesses handle the newest client request, as sketched below
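A hedged sketch of that idea, assuming each server stores its access count as a decimal string in a child of a hypothetical /servers node (the path and encoding are illustrative, not a fixed convention):

import org.apache.zookeeper.ZooKeeper;

import java.nio.charset.StandardCharsets;
import java.util.Comparator;
import java.util.List;

public class SoftLoadBalancer {
    private final ZooKeeper zooKeeper;

    public SoftLoadBalancer(ZooKeeper zooKeeper) {
        this.zooKeeper = zooKeeper;
    }

    // Returns the child of /servers whose stored access count is smallest.
    public String leastLoadedServer() throws Exception {
        List<String> servers = zooKeeper.getChildren("/servers", false);
        return servers.stream()
                .min(Comparator.comparingLong(this::accessCount))
                .orElseThrow(() -> new IllegalStateException("no servers registered"));
    }

    private long accessCount(String server) {
        try {
            byte[] data = zooKeeper.getData("/servers/" + server, false, null);
            return Long.parseLong(new String(data, StandardCharsets.UTF_8));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}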
Basic Usage
Installation
On each of the machines, define the service with docker-compose:
version: "3"
services:
  zookeeper:
    container_name: zookeeper
    restart: always
    privileged: true
    image: zookeeper
    ports:
      - 20012:2181
      - 20013:2888
      - 20014:3888
    volumes:
      - /opt/docker/zookeeper/conf:/conf
      - /opt/docker/zookeeper/data:/data
      - /opt/docker/zookeeper/logs:/datalog
Then bring the container up with docker-compose:
docker-compose up -d
Cluster Configuration
There are three machines here, with the domain names:
server.passnight.local
replica.passnight.local
follower.passnight.local
Their default shell prompts are, respectively:
passnight@passnight-s600
passnight@passnight-acepc
passnight@passnight-centerm
Cluster configuration steps:
# Configure the myid of server
passnight@passnight-s600:/opt/docker/zookeeper/data$ sudo vim myid
1
:wq
# Configure the myid of replica
passnight@passnight-acepc:/opt/docker/zookeeper/data$ sudo vim myid
2
:wq
# Restart ZooKeeper
passnight@passnight-acepc:/opt/docker/zookeeper/data$ docker restart zookeeper
zookeeper
passnight@passnight-s600:/opt/docker/zookeeper/data$ docker restart zookeeper
zookeeper
Next, add the cluster's server list to each node's configuration file. Note that the local machine's entry must use the 0.0.0.0 bind address and the ports must be the in-container ports; otherwise the connections will fail.
# Data paths
dataDir=/data
dataLogDir=/datalog
# Heartbeat interval between ZooKeeper servers and clients, and between servers, in milliseconds
tickTime=2000
# Time limit (in ticks) for the initial Leader/Follower synchronization
initLimit=5
# Sync limit (in ticks): if a Follower exceeds it, the Leader considers the Follower offline and removes it from the server list
syncLimit=2
# clientPort=2181 is the client port and is normally left unchanged
autopurge.snapRetainCount=3
autopurge.purgeInterval=0
maxClientCnxns=60
standaloneEnabled=true
admin.enableServer=true
# Format: server.[server id]=[address]:[port for Follower-Leader communication]:[port for leader election]
# The local machine's address must be 0.0.0.0 and the ports must be the in-container ports, otherwise the network connection will fail
server.1=0.0.0.0:2888:3888
server.2=replica.passnight.local:20013:20014
server.3=follower.passnight.local:20013:20014
Enter the containers and check the connection status:
# First machine
passnight@passnight-s600:/opt/docker/zookeeper$ docker exec -it zookeeper bash
root@4dc78c0fb310:/apache-zookeeper-3.8.2-bin# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
# Second machine
passnight@passnight-centerm:/opt/docker/zookeeper$ docker exec -it zookeeper bash
root@b0c269f3f8cc:/apache-zookeeper-3.9.0-bin# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
# Third machine
passnight@passnight-acepc:/opt/docker/zookeeper$ docker exec -it zookeeper bash
root@0daabd03b9e0:/apache-zookeeper-3.8.2-bin# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
Client Command-Line Operations
# Start the client
root@4dc78c0fb310:/apache-zookeeper-3.8.2-bin# zkCli.sh
Connecting to localhost:2181
# .... some log output
[zk: localhost:2181(CONNECTED) 0]
ZNode Metadata
[zk: localhost:2181(CONNECTED) 7] ls -s /
cZxid = 0x0                           # zxid of the transaction that created the znode
ctime = Thu Jan 01 00:00:00 UTC 1970  # creation timestamp of the znode
mZxid = 0x0                           # zxid of the transaction that last modified the znode
mtime = Thu Jan 01 00:00:00 UTC 1970  # timestamp of the znode's last modification
pZxid = 0x2e5                         # zxid of the last update to the znode's children
cversion = 34                         # child version number: how many times the children changed
dataVersion = 0                       # data version number
aclVersion = 0                        # ACL (access control list) version number
ephemeralOwner = 0x0                  # session id of the owner if the znode is ephemeral; otherwise 0
dataLength = 0                        # length of the znode's data
numChildren = 12                      # number of children
Client API
Dependency:
<!-- https://mvnrepository.com/artifact/org.apache.zookeeper/zookeeper -->
<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.9.0</version>
</dependency>
Test environment:
package com.passnight.zookeeper.client;

import lombok.SneakyThrows;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;
import org.junit.Before;
import org.junit.Test;

import java.nio.charset.StandardCharsets;

public class ZooKeeperClientTest {
    private final String connectString = "server.passnight.local:20012,follower.passnight.local:20012,replica.passnight.local:20012";
    private final int sessionTimeOut = 2000;
    private final Watcher watcher = event -> {};
    ZooKeeper zooKeeper;

    @Before
    @SneakyThrows
    public void init() {
        zooKeeper = new ZooKeeper(connectString, sessionTimeOut, watcher);
    }
}
CRUD Operations
Creating a node:
@Test
public void create() throws InterruptedException, KeeperException {
    String response = zooKeeper.create("/hello",
            "data in /hello".getBytes(StandardCharsets.UTF_8),
            Ids.OPEN_ACL_UNSAFE,
            CreateMode.PERSISTENT);
    System.out.println(response);
}
// Output: /hello
Reading child nodes:
// requires: import java.util.List;
@Test
public void getChildren() throws InterruptedException, KeeperException {
    List<String> children = zooKeeper.getChildren("/", true);
    children.forEach(System.out::println);
}
// Output:
// zookeeper
// hello
Checking whether a node exists:
// requires: import org.apache.zookeeper.data.Stat;
@Test
public void exists() throws InterruptedException, KeeperException {
    Stat exists = zooKeeper.exists("/data", false);
    System.out.println(exists == null ? "not exist" : "exist"); // Output: not exist
    exists = zooKeeper.exists("/hello", false);
    System.out.println(exists == null ? "not exist" : "exist"); // Output: exist
}
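The update and delete halves of CRUD are not shown above; here is a minimal sketch in the same test class (the method name is hypothetical; the version argument -1 means "skip the optimistic version check"):

@Test
public void updateAndDelete() throws InterruptedException, KeeperException {
    // Update: overwrite the node's data unconditionally (version -1 skips the version check).
    zooKeeper.setData("/hello", "new data".getBytes(StandardCharsets.UTF_8), -1);

    // Delete: remove the node; this fails if the node still has children.
    zooKeeper.delete("/hello", -1);
}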
Watching a node:
// requires: import org.apache.zookeeper.WatchedEvent;
// requires: import java.util.List; import java.util.concurrent.TimeUnit;
@Test
public void watchChange() throws InterruptedException, KeeperException {
    zooKeeper.getChildren("/", (WatchedEvent event) -> {
        try {
            List<String> children = zooKeeper.getChildren("/", true);
            children.forEach(System.out::println);
        } catch (KeeperException | InterruptedException e) {
            throw new RuntimeException(e);
        }
    });
    TimeUnit.DAYS.sleep(1);
}
// Now add a node with zkCli:
// [zk: localhost:2181(CONNECTED) 1] create /new "test the listener"
// Console output:
// zookeeper
// hello
// new
Worked Examples
Registry Example
- In a distributed system there can be multiple servers that go online and offline dynamically, and any client should perceive those changes in real time
- When a server comes online, it notifies ZooKeeper and registers its information
- When a server goes offline, the ephemeral node it created is deleted, and ZooKeeper notifies the clients through their watchers that the server is down
ZooKeeper configuration:
package com.passnight.zookeeper.config;

import org.apache.zookeeper.Watcher;

public class ZookeeperConfig {
    public final static String connectString = "server.passnight.local:20012,follower.passnight.local:20012,replica.passnight.local:20012";
    public final static int sessionTimeOut = 2000;
    public final static Watcher emptyWatcher = event -> {};
}
Node creation
Create the servers node on the server as the registry's root node:
[zk: localhost:2181(CONNECTED) 10] create /servers "servers"
Created /servers
Registry watcher code:
package com.passnight.zookeeper.discovery;

import com.passnight.zookeeper.config.ZookeeperConfig;
import lombok.extern.log4j.Log4j2;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

import java.io.IOException;
import java.util.List;
import java.util.concurrent.CountDownLatch;

@Log4j2
public class DiscoveryServer {
    final static private String REGISTER_PATH = "/servers";
    private ZooKeeper zooKeeper;
    private List<String> activeServerList;
    CountDownLatch latch = new CountDownLatch(1);

    private void connect() throws IOException {
        zooKeeper = new ZooKeeper(ZookeeperConfig.connectString, ZookeeperConfig.sessionTimeOut, ZookeeperConfig.emptyWatcher);
    }

    // Watches are one-shot, so the callback re-registers by calling listen() again.
    private void listen() throws InterruptedException, KeeperException {
        activeServerList = zooKeeper.getChildren(REGISTER_PATH, (event) -> {
            try {
                listen();
            } catch (InterruptedException | KeeperException e) {
                throw new RuntimeException(e);
            }
        });
        log.info("server status changed: {}", activeServerList);
    }

    public static void main(String[] args) throws InterruptedException, KeeperException, IOException {
        DiscoveryServer server = new DiscoveryServer();
        server.connect();
        server.listen();
        server.latch.await();
    }
}
Running the code:
# Create a node with zkCli
[zk: localhost:2181(CONNECTED) 46] create -es /servers/server1 "server1"
Created /servers/server10000000000
# The Java client receives the event
14:32:49.134 [main-EventThread] INFO com.passnight.zookeeper.discovery.Client - server status changed: [server10000000000]
# Create another node
[zk: localhost:2181(CONNECTED) 47] create -es /servers/server2 "server2"
Created /servers/server20000000001
# The Java client still receives the event
14:33:15.711 [main-EventThread] INFO com.passnight.zookeeper.discovery.Client - server status changed: [server20000000001, server10000000000]
# Delete a node
[zk: localhost:2181(CONNECTED) 51] delete /servers/server10000000000
# The Java side receives the deletion event
14:40:07.598 [main-EventThread] INFO com.passnight.zookeeper.discovery.Client - server status changed: [server20000000001]
Service Provider
Service provider code:
package com.passnight.zookeeper.discovery;

import com.passnight.zookeeper.config.ZookeeperConfig;
import lombok.extern.log4j.Log4j2;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.springframework.util.Assert;

import java.io.IOException;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.TimeUnit;

@Log4j2
public class ServiceServer {
    private ZooKeeper zooKeeper;
    final private static String REGISTER_PATH = "/servers/server";

    private void connect() throws IOException {
        zooKeeper = new ZooKeeper(ZookeeperConfig.connectString, ZookeeperConfig.sessionTimeOut, ZookeeperConfig.emptyWatcher);
    }

    // Registers an ephemeral sequential node, so the entry disappears when this server's session ends.
    private void register(String hostname) throws InterruptedException, KeeperException {
        Assert.notNull(zooKeeper, "connect to the zookeeper before registration");
        zooKeeper.create(REGISTER_PATH, hostname.getBytes(StandardCharsets.UTF_8), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        log.info("{} online", hostname);
    }

    private void serve() throws InterruptedException {
        TimeUnit.DAYS.sleep(Long.MAX_VALUE);
    }

    public static void main(String[] args) throws IOException, InterruptedException, KeeperException {
        ServiceServer serviceServer = new ServiceServer();
        serviceServer.connect();
        serviceServer.register(String.valueOf(InetAddress.getLocalHost()));
        serviceServer.serve();
    }
}
Starting the service provider:
# This log line shows the service provider started successfully
14:43:26.363 [main] INFO com.passnight.zookeeper.discovery.ServiceServer - passnight-s600/fd12:4abe:6e6e:0:0:0:0:7f8 online
# This log line is printed by the registry, showing the provider registered successfully
14:43:26.369 [main-EventThread] INFO com.passnight.zookeeper.discovery.DiscoveryServer - server status changed: [server0000000002, server20000000001]
Distributed Lock
- Acquiring the lock: the client creates an ephemeral sequential node
- The node with the smallest sequence number holds the lock and runs its business logic; every other client watches the node immediately before its own
- Releasing the lock: the holder deletes its node; when a waiting client observes that its predecessor has been deleted, it acquires the lock and proceeds
Distributed lock code:
package com.passnight.zookeeper.lock;

import com.passnight.zookeeper.config.ZookeeperConfig;
import org.apache.zookeeper.*;

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Objects;
import java.util.concurrent.Semaphore;

public class DistributeLock {
    final static private String LOCK_PATH = "/locks";
    private ZooKeeper zooKeeper;
    Semaphore lock = new Semaphore(0);
    private String waitingForLock = "";
    private String currentLock;

    private final Watcher lockWatcher = (event) -> {
        // Just connected to ZooKeeper and no client holds the lock: release a permit
        if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
            lock.release();
        }
        // The predecessor node was deleted (its lock released): release a permit
        if (event.getType() == Watcher.Event.EventType.NodeDeleted
                && (LOCK_PATH + "/" + waitingForLock).equals(event.getPath())) {
            lock.release();
        }
    };

    private void connect() throws IOException {
        zooKeeper = new ZooKeeper(ZookeeperConfig.connectString, ZookeeperConfig.sessionTimeOut, ZookeeperConfig.emptyWatcher);
    }

    public DistributeLock() throws IOException, InterruptedException, KeeperException {
        connect();
        if (Objects.isNull(zooKeeper.exists(LOCK_PATH, false))) {
            zooKeeper.create(LOCK_PATH, "locks".getBytes(StandardCharsets.UTF_8), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        }
    }

    public void lock() throws InterruptedException, KeeperException {
        currentLock = zooKeeper.create(LOCK_PATH + "/seq-", null, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL)
                .substring(LOCK_PATH.length() + 1);
        List<String> locks = zooKeeper.getChildren(LOCK_PATH, false);
        if (locks.size() > 1) { // more than one lock request exists
            locks.sort(String::compareTo);
            String minNode = locks.get(0);
            int currentNodeIndex = locks.indexOf(currentLock);
            if (!currentLock.equals(minNode)) { // the current node is not the smallest: acquisition failed
                waitingForLock = locks.get(currentNodeIndex - 1);
                zooKeeper.getData(LOCK_PATH + "/" + waitingForLock, lockWatcher, null);
                lock.acquire(); // block until the predecessor is deleted
            }
        }
    }

    public void unlock() throws InterruptedException, KeeperException {
        zooKeeper.delete(LOCK_PATH + "/" + currentLock, -1);
    }
}
Test case:
package com.passnight.zookeeper.lock;

import lombok.SneakyThrows;
import lombok.extern.log4j.Log4j2;
import org.apache.zookeeper.KeeperException;

import java.io.IOException;
import java.util.concurrent.TimeUnit;

@Log4j2
public class DistributeLockTest {
    public static void main(String[] args) throws IOException, InterruptedException, KeeperException {
        final DistributeLock lock1 = new DistributeLock();
        final DistributeLock lock2 = new DistributeLock();
        new Thread(new Runnable() {
            @Override
            @SneakyThrows
            public void run() {
                lock1.lock();
                log.info("lock acquired");
                TimeUnit.SECONDS.sleep(3);
                lock1.unlock();
                log.info("lock released");
            }
        }).start();
        new Thread(new Runnable() {
            @Override
            @SneakyThrows
            public void run() {
                lock2.lock();
                log.info("lock acquired");
                TimeUnit.SECONDS.sleep(3);
                lock2.unlock();
                log.info("lock released");
            }
        }).start();
    }
}
The resulting log:
15:51:00.984 [Thread-1] INFO com.passnight.zookeeper.lock.DistributeLockTest - lock acquired
15:51:04.000 [Thread-0] INFO com.passnight.zookeeper.lock.DistributeLockTest - lock acquired
15:51:04.000 [Thread-1] INFO com.passnight.zookeeper.lock.DistributeLockTest - lock released
15:51:07.013 [Thread-0] INFO com.passnight.zookeeper.lock.DistributeLockTest - lock released
Thread-1 acquires the lock first; about three seconds later thread-0 acquires it, at the same moment thread-1 releases it; after another three seconds thread-0 releases the lock.
Curator
Distributed Lock
Dependencies:
<!-- https://mvnrepository.com/artifact/org.apache.curator/curator-framework -->
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-framework</artifactId>
    <version>5.5.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.curator/curator-recipes -->
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>5.5.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.curator/curator-client -->
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-client</artifactId>
    <version>5.5.0</version>
</dependency>
Using the native ZooKeeper client directly has the following problems:
- The asynchronous session connection must be handled by hand
- Watches are one-shot and must be re-registered, as the registry example's listen() method does
- There is no multi-level (recursive) create or delete; the recursion must be written manually
- Overall development complexity is high
Test case:
package com.passnight.zookeeper.lock;

import com.passnight.zookeeper.config.ZookeeperConfig;
import lombok.SneakyThrows;
import lombok.extern.log4j.Log4j2;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

import java.util.concurrent.TimeUnit;

@Log4j2
public class CuratorDistributeLock {
    public static void main(String[] args) {
        InterProcessMutex lock1 = new InterProcessMutex(getCuratorFramework(), "/locks");
        InterProcessMutex lock2 = new InterProcessMutex(getCuratorFramework(), "/locks");
        new Thread(new Runnable() {
            @Override
            @SneakyThrows
            public void run() {
                lock1.acquire();
                log.info("lock acquired");
                lock1.acquire(); // InterProcessMutex is reentrant
                log.info("lock is reentrant");
                TimeUnit.SECONDS.sleep(3);
                lock1.release();
                lock1.release(); // a reentrant lock is released once per acquire
                log.info("lock released");
            }
        }).start();
        new Thread(new Runnable() {
            @Override
            @SneakyThrows
            public void run() {
                lock2.acquire();
                log.info("lock acquired");
                TimeUnit.SECONDS.sleep(3);
                lock2.release();
                log.info("lock released");
            }
        }).start();
    }

    private static CuratorFramework getCuratorFramework() {
        CuratorFramework curatorFramework = CuratorFrameworkFactory.builder()
                .connectString(ZookeeperConfig.connectString)
                .connectionTimeoutMs(ZookeeperConfig.sessionTimeOut)
                .sessionTimeoutMs(ZookeeperConfig.sessionTimeOut)
                .retryPolicy(new ExponentialBackoffRetry(300, 3))
                .build();
        curatorFramework.start();
        return curatorFramework;
    }
}
The output shows that the distributed lock works as intended. Note that a reentrant lock must be released once per acquisition, not just once:
16:15:48.938 [Thread-0] INFO com.passnight.zookeeper.lock.CuratorDistributeLock - lock acquired
16:15:48.938 [Thread-0] INFO com.passnight.zookeeper.lock.CuratorDistributeLock - lock is reentrant
16:15:51.948 [Thread-0] INFO com.passnight.zookeeper.lock.CuratorDistributeLock - lock released
16:15:51.987 [Thread-1] INFO com.passnight.zookeeper.lock.CuratorDistributeLock - lock acquired
16:15:55.000 [Thread-1] INFO com.passnight.zookeeper.lock.CuratorDistributeLock - lock released
ZooKeeper Algorithm Foundations
The Byzantine Generals Problem
- The Byzantine generals problem is a protocol problem: the generals of the Byzantine empire must unanimously decide whether to attack the enemy army
- The generals are geographically separated, and some of them are traitors
- A traitor may act arbitrarily to achieve its goal: deceive the generals into a decision that not all of them agree with, or confuse them so that no decision can be made
The Paxos Algorithm
- Paxos is a message-passing, highly fault-tolerant consensus algorithm: it guarantees that a distributed system reaches agreement on a value, and no failure can break that agreement
- Nodes are divided into proposers (Proposer), acceptors (Acceptor), and learners (Learner); note that one node may play several roles at once
- Paxos runs in three phases:
  - Prepare phase
    - A Proposer sends Prepare requests to multiple Acceptors
    - Each Acceptor answers a Prepare request with a Promise
  - Accept phase
    - After receiving Promises from a majority of Acceptors, the Proposer sends them a Propose request
    - Each Acceptor processes the Propose request and Accepts it if allowed
  - Learn phase
    - The Proposer sends the agreed decision to all Learners
- Algorithm flow:
  - Prepare: the Proposer generates a globally unique, increasing Proposal ID and sends Prepare requests to all Acceptors; the request carries no value, only the Proposal ID
  - Promise: on receiving a Prepare request, an Acceptor makes two promises and one reply:
    - It will no longer accept Prepare requests whose Proposal ID is less than or equal to the current one
    - It will no longer accept Accept requests whose Proposal ID is less than the current one
    - Without violating its earlier promises, it replies with the Value and Proposal ID of the highest-numbered proposal it has already accepted, or an empty value if it has accepted none
  - Propose: after receiving Promises from a majority of Acceptors, the Proposer picks the Value of the highest-numbered proposal among the replies as the value of its own proposal; if every reply carried an empty value, it may choose the Value freely. It then sends Propose requests carrying the current Proposal ID to all Acceptors
  - Accept: on receiving a Propose request, an Acceptor accepts and persists the Proposal ID and Value, provided this does not violate its earlier promises
  - Learn: once the Proposer has received Accepts from a majority of Acceptors, the decision is formed, and the Proposer sends it to all Learners
- A remaining problem:
  - Suppose proposers A1 and A5 both need acceptor A3's acceptance to pass their proposals:
    - A1 sends proposal number 1; A3 promises A1
    - A5 sends proposal number 2; A3 promises A5
    - A1's proposal 1 can no longer win A3's support, so A1 re-proposes with number 3
    - A1 sends proposal number 3; A3 promises A1
    - A5's proposal 2 can no longer win A3's support, so A5 re-proposes
    - This repeats indefinitely, forming a livelock; a sketch of the acceptor rules that produce it follows
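A minimal sketch of a single Paxos acceptor's promise/accept rules, under simplifying assumptions (one value, no persistence or networking; the class and field names are illustrative):

import java.util.Optional;

// Illustrative single-node acceptor; real Paxos persists this state to disk.
public class PaxosAcceptor {
    private long promisedId = -1;        // highest Proposal ID promised so far
    private long acceptedId = -1;        // Proposal ID of the last accepted proposal
    private String acceptedValue = null; // value of the last accepted proposal

    record Promise(long acceptedId, Optional<String> acceptedValue) {}

    // Prepare phase: promise to ignore smaller-or-equal proposal ids,
    // and report the highest proposal accepted so far (if any).
    public synchronized Optional<Promise> onPrepare(long proposalId) {
        if (proposalId <= promisedId) {
            return Optional.empty(); // reject: already promised a higher (or equal) id
        }
        promisedId = proposalId;
        return Optional.of(new Promise(acceptedId, Optional.ofNullable(acceptedValue)));
    }

    // Accept phase: accept unless it would violate an earlier promise.
    public synchronized boolean onAccept(long proposalId, String value) {
        if (proposalId < promisedId) {
            return false; // reject: promised not to accept ids below promisedId
        }
        promisedId = proposalId;
        acceptedId = proposalId;
        acceptedValue = value;
        return true;
    }
}

In the livelock above, A3's promisedId keeps climbing as A1 and A5 alternate onPrepare() calls, so neither proposer's onAccept() ever succeeds.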
The ZAB Protocol
- To resolve the livelock described above, ZAB restricts proposals so that only a single Leader can issue them
- The ZAB protocol has two modes: message broadcast and crash recovery
- Message broadcast:
  - A client issues a write request
  - The Leader turns the request into a transaction Proposal and assigns it a globally unique id, the zxid
  - The Leader keeps one queue per Follower, puts the proposal to broadcast into each queue, and sends in FIFO order
  - On receiving a proposal, a Follower first writes it to disk as a transaction log entry, then returns an ACK
  - Once the Leader has received ACKs from more than half of the Followers, the message counts as successfully sent and a commit message can be issued
  - The Leader broadcasts the commit to all Followers and commits the transaction itself; a Follower commits the transaction when it receives the commit message
- Crash recovery: if the Leader crashes or is cut off by a network failure, the cluster enters crash-recovery mode
  - Situations that can arise:
    - A transaction was proposed by the Leader, and then the Leader crashed
    - A transaction was committed on the Leader and acked by a majority of Followers, but the Leader died before the commit message went out
  - Crash recovery must guarantee:
    - Every proposal committed by the Leader is eventually committed by all Followers
    - Every proposal that was issued by the Leader but never committed is discarded
  - Crash recovery consists of two parts: leader election and data recovery
  - The new Leader must satisfy the following conditions:
    - It must not contain uncommitted proposals
    - It holds the largest zxid among the candidates, which means its data is the most up to date
  - ZAB data resynchronization:
    - Before the new Leader starts serving, it first confirms that every proposal in its transaction log has been committed by a majority of the servers in the cluster
    - Only after a Follower has synchronized all outstanding transactions and applied them to its in-memory data does the Leader add it to the list of available Followers
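The zxid that ZAB assigns to every proposal is a 64-bit number whose high 32 bits hold the leader epoch and whose low 32 bits hold a per-epoch counter; this is what makes the epoch-then-counter ordering above possible. A small sketch of that decomposition:

public class Zxid {
    // The high 32 bits of a zxid are the leader epoch; the low 32 bits
    // are a counter that resets to 0 whenever a new epoch (leader) begins.
    static long epoch(long zxid) {
        return zxid >>> 32;
    }

    static long counter(long zxid) {
        return zxid & 0xFFFFFFFFL;
    }

    public static void main(String[] args) {
        long zxid = 0x500000006L; // a zxid as printed by zkCli (cZxid = 0x500000006)
        System.out.println(epoch(zxid));   // 5
        System.out.println(counter(zxid)); // 6
    }
}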
CAP Theorem
The CAP theorem says a distributed system cannot provide all three of the following at once:
- Consistency: whether multiple replicas can be kept identical
- Availability: the system stays available at all times, and every user operation returns a result within bounded time
- Partition tolerance: the system keeps providing consistent and available service in the face of any network partition
ZooKeeper guarantees CP, for the following reasons:
- ZooKeeper cannot guarantee the availability of every request: in extreme cases it drops some requests
- ZooKeeper does not serve requests while an election is in progress
ZooKeeper Source Code Analysis
Source: apache/zookeeper: Apache ZooKeeper (github.com)