Single-Machine Cluster Test
ZooKeeper version: 3.4.6

1. Download ZooKeeper

https://archive.apache.org/dist/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz

2. Extract the tarball

tar -zxvf zookeeper-3.4.6.tar.gz

3. Make three copies

mkdir zk-cluster
cp -r zookeeper-3.4.6 ./zk-cluster/zk1
cp -r zookeeper-3.4.6 ./zk-cluster/zk2
cp -r zookeeper-3.4.6 ./zk-cluster/zk3

4. Create three data directories, one per instance. The sample config points dataDir at /tmp/zookeeper by default, which is unsuitable for anything persistent; keeping the data outside the ZooKeeper install directories themselves is also recommended.

cd zk-cluster
mkdir data_zk1
mkdir data_zk2
mkdir data_zk3

The directory tree now looks like this:

├── data_zk1
├── data_zk2
├── data_zk3
├── zk1
├── zk2
└── zk3

5. In each data directory, create a file named "myid" containing that node's id

echo 1 > data_zk1/myid
echo 2 > data_zk2/myid
echo 3 > data_zk3/myid
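
A quick sanity check (plain shell, nothing ZooKeeper-specific): each file should hold exactly the id of its node.

cat data_zk1/myid data_zk2/myid data_zk3/myid
# expected output: 1, 2, 3 on separate lines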

The directory tree now looks like this:

├── data_zk1
│   └── myid
├── data_zk2
│   └── myid
├── data_zk3
│   └── myid
├── zk1
├── zk2
└── zk3

6. Set up each instance's configuration file
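
The release ships only conf/zoo_sample.cfg; zoo.cfg does not exist yet and has to be created, for example by copying the sample in each instance:

cp zk1/conf/zoo_sample.cfg zk1/conf/zoo.cfg
cp zk2/conf/zoo_sample.cfg zk2/conf/zoo.cfg
cp zk3/conf/zoo_sample.cfg zk3/conf/zoo.cfg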

./zk1/conf/zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# NOTE: point dataDir at this instance's own data directory
dataDir=/home/jiangyi/env/zookeeper/zk-cluster/data_zk1
# the port at which the clients will connect
# NOTE: clientPort must differ for each instance
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
# NOTE: the N in server.N must match the myid in that server's data
# directory. The first port carries quorum traffic (followers connecting
# to the leader) and the second is used for leader election; since all
# three servers share one host, each id needs its own pair of ports.
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890

./zk2/conf/zoo.cfg

## Identical to zk1/conf/zoo.cfg except for the lines below; the rest is omitted ......
# NOTE: point dataDir at this instance's own data directory
dataDir=/home/jiangyi/env/zookeeper/zk-cluster/data_zk2
# the port at which the clients will connect
# NOTE: clientPort must differ for each instance
clientPort=2182

./zk3/conf/zoo.cfg

## Identical to zk1/conf/zoo.cfg except for the lines below; the rest is omitted ......
# NOTE: point dataDir at this instance's own data directory
dataDir=/home/jiangyi/env/zookeeper/zk-cluster/data_zk3
# the port at which the clients will connect
# NOTE: clientPort must differ for each instance
clientPort=2183
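
To recap, the only values that differ across the three configs (the dataDir prefix /home/jiangyi/env/zookeeper/zk-cluster is this machine's path and will differ on yours):

instance  myid  clientPort  dataDir    quorum/election ports (server.N)
zk1       1     2181        data_zk1   2888:3888
zk2       2     2182        data_zk2   2889:3889
zk3       3     2183        data_zk3   2890:3890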

7. Write local cluster management scripts

./start-cluster.sh

#!/bin/sh
# run from the script's own directory so the relative paths resolve
# no matter where the script is invoked from
cd "$(dirname "$0")" || exit 1

./zk1/bin/zkServer.sh start
./zk2/bin/zkServer.sh start
./zk3/bin/zkServer.sh start

echo "zk-server-local-cluster started!"

./stop-cluster.sh

#!/bin/sh
# run from the script's own directory so the relative paths resolve
cd "$(dirname "$0")" || exit 1

./zk1/bin/zkServer.sh stop
./zk2/bin/zkServer.sh stop
./zk3/bin/zkServer.sh stop

echo "zk-server-local-cluster stopped!"

./show-cluster-status.sh

#!/bin/sh
# run from the script's own directory so the relative paths resolve
cd "$(dirname "$0")" || exit 1

./zk1/bin/zkServer.sh status
./zk2/bin/zkServer.sh status
./zk3/bin/zkServer.sh status

Make the scripts executable (execute permission is all they need):

chmod +x ./*.sh

The directory now looks like this:

├── show-cluster-status.sh
├── start-cluster.sh
├── stop-cluster.sh
├── data_zk1
│   └── myid
├── data_zk2
│   └── myid
├── data_zk3
│   └── myid
├── zk1
├── zk2
└── zk3

8. Run the scripts

Start the cluster:

./start-cluster.sh
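
If a node fails to come up, check its zookeeper.out; in 3.4.x it is written to the directory the script was run from unless ZOO_LOG_DIR is set.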

Check the status; the setup is working once one node reports leader and the other two report follower:

./show-cluster-status.sh

Sample output:
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /home/jiangyi/env/zookeeper/zk-cluster/zk1/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /home/jiangyi/env/zookeeper/zk-cluster/zk2/bin/../conf/zoo.cfg
Client port found: 2182. Client address: localhost. Client SSL: false.
Mode: leader
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /home/jiangyi/env/zookeeper/zk-cluster/zk3/bin/../conf/zoo.cfg
Client port found: 2183. Client address: localhost. Client SSL: false.
Mode: follower
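
As a final smoke test, you can create a znode through one server and read it back through another to confirm replication. This is a minimal sketch; the znode path /smoke-test and its data are arbitrary, and zkCli.sh (shipped in each instance's bin directory) accepts a single command after its options:

# create a znode via the server on 2181
./zk1/bin/zkCli.sh -server 127.0.0.1:2181 create /smoke-test hello

# read it back via the server on 2183
./zk3/bin/zkCli.sh -server 127.0.0.1:2183 get /smoke-test
# the second command should print "hello" amid the client's log output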