Manually Building k8s 1.10.4: Deploying the etcd Cluster
Note: this article was published quite a while ago; some of its content may be outdated, so read with discretion.
This document walks through deploying a highly available three-node etcd cluster:
- download and distribute the etcd binaries;
- create x509 certificates for the etcd cluster nodes, used to encrypt traffic between clients (such as etcdctl) and the cluster, and between cluster members;
- create the etcd systemd unit files and configure the service parameters;
- check that the cluster is working.
# 1. Download and Distribute the etcd Binaries
Download the release package (v3.3.7 at the time of writing) from the https://github.com/coreos/etcd/releases page:
wget https://github.com/coreos/etcd/releases/download/v3.3.7/etcd-v3.3.7-linux-amd64.tar.gz
tar -xvf etcd-v3.3.7-linux-amd64.tar.gz
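You can optionally verify the downloaded tarball against the SHA256 checksums published on the same release page (the expected checksum value is not reproduced here):
```bash
# Compute the tarball's SHA256 and compare it with the value on the release page.
sha256sum etcd-v3.3.7-linux-amd64.tar.gz
```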
Distribute the binaries to all cluster nodes:
cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp etcd-v3.3.7-linux-amd64/etcd* k8s@${node_ip}:/opt/k8s/bin
  ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
done
EOF
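After running the script with bash magic.sh, it is worth confirming that every node received a working binary. A minimal sketch, assuming the same environment.sh and SSH setup as above:
```bash
#!/bin/bash
# Print the etcd version on every node to confirm the binaries were distributed.
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh k8s@${node_ip} "/opt/k8s/bin/etcd --version | head -1"
done
```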
# 2. Create the etcd Certificate and Private Key
Create the certificate signing request:
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.106.3",
    "192.168.106.4",
    "192.168.106.5"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
- The hosts field lists the node IPs or domain names authorized to use this certificate; here the IPs of all three etcd cluster nodes are included.
Generate the certificate and private key:
$cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
$ls etcd*
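The ls should show etcd.csr, etcd-key.pem, and etcd.pem alongside the etcd-csr.json input. To confirm the hosts made it into the certificate as SANs, a quick inspection (assuming openssl is available):
```bash
# Show the Subject Alternative Names embedded in the generated certificate.
openssl x509 -in etcd.pem -noout -text | grep -A1 'Subject Alternative Name'
```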
Distribute the generated certificate and private key to each etcd node:
cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p /etc/etcd/cert && chown -R k8s /etc/etcd/cert"
  scp etcd*.pem k8s@${node_ip}:/etc/etcd/cert/
done
EOF
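Optionally, confirm the files landed on every node; a minimal sketch under the same assumptions as the scripts above:
```bash
#!/bin/bash
# List the distributed certificate and key on every node.
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh k8s@${node_ip} "ls -l /etc/etcd/cert/etcd*.pem"
done
```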
# 3. Create the etcd systemd Unit Template File
$source /opt/k8s/bin/environment.sh
$cat > etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
User=k8s
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/opt/k8s/bin/etcd \\
  --data-dir=/var/lib/etcd \\
  --name=##NODE_NAME## \\
  --cert-file=/etc/etcd/cert/etcd.pem \\
  --key-file=/etc/etcd/cert/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-cert-file=/etc/etcd/cert/etcd.pem \\
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --listen-peer-urls=https://##NODE_IP##:2380 \\
  --initial-advertise-peer-urls=https://##NODE_IP##:2380 \\
  --listen-client-urls=https://##NODE_IP##:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://##NODE_IP##:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
- User: run the service as the k8s account.
- WorkingDirectory, --data-dir: set the working and data directory to /var/lib/etcd; this directory must be created before the service starts.
- --name: the node name; when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list.
- --cert-file, --key-file: the certificate and private key etcd uses when serving clients.
- --trusted-ca-file: the CA certificate that signed the client certificates, used to verify them.
- --peer-cert-file, --peer-key-file: the certificate and private key etcd uses for peer-to-peer communication.
- --peer-trusted-ca-file: the CA certificate that signed the peer certificates, used to verify them.
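Note that ${ETCD_NODES} is expanded while the template is written, so it must already be defined. The contents of /opt/k8s/bin/environment.sh are not shown in this article; a minimal sketch consistent with the IPs used here (the node names are illustrative only) could be:
```bash
#!/bin/bash
# Hypothetical excerpt from /opt/k8s/bin/environment.sh; names are examples only.
# NODE_NAMES and NODE_IPS are parallel arrays of equal length.
export NODE_NAMES=(etcd-node0 etcd-node1 etcd-node2)
export NODE_IPS=(192.168.106.3 192.168.106.4 192.168.106.5)
# Initial cluster string: comma-separated name=https://ip:2380 pairs.
export ETCD_NODES="etcd-node0=https://192.168.106.3:2380,etcd-node1=https://192.168.106.4:2380,etcd-node2=https://192.168.106.5:2380"
```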
# 4. Render the Template into Per-Node Unit Files
Replace the variables in the template to create a systemd unit file for each node:
cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ))
do
  sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" \
      -e "s/##NODE_IP##/${NODE_IPS[i]}/" \
      etcd.service.template > etcd-${NODE_IPS[i]}.service
done
EOF
$bash magic.sh
$ls *.service
etcd-192.168.106.3.service etcd-192.168.106.4.service etcd-192.168.106.5.service
- NODE_NAMES and NODE_IPS are bash arrays of equal length holding the node names and their corresponding IPs (see the environment.sh sketch in section 3).
Distribute the generated systemd unit files:
cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p /var/lib/etcd && chown -R k8s /var/lib/etcd"
  scp etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service
done
EOF
- The etcd data and working directory /var/lib/etcd is created here if it was not created earlier.
- The file is renamed to etcd.service as it is copied.
# 5. Start the etcd Service
cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl start etcd &"
done
EOF
- On first start, each etcd process waits for the other nodes to join the cluster, so systemctl start etcd blocks for a while; this is normal.
# 6. Check the Startup Result
cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh k8s@${node_ip} "systemctl status etcd|grep Active"
done
EOF
If the output looks like the following, the service is running normally:
$bash magic.sh
>>> 192.168.106.3
Active: active (running) since Fri 2018-11-23 16:56:13 CST; 6h ago
>>> 192.168.106.4
Active: active (running) since Fri 2018-11-23 16:56:13 CST; 6h ago
>>> 192.168.106.5
Active: active (running) since Fri 2018-11-23 16:56:20 CST; 6h ago
Otherwise, inspect the logs to find the cause:
$ journalctl -xu etcd
# 7. Verify the Service Status
After the etcd cluster is deployed, run the following command on any etcd node:
cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
    --endpoints=https://${node_ip}:2379 \
    --cacert=/etc/kubernetes/cert/ca.pem \
    --cert=/etc/etcd/cert/etcd.pem \
    --key=/etc/etcd/cert/etcd-key.pem endpoint health
done
EOF
If the output looks like the following, the cluster is healthy:
$bash magic.sh
>>> 192.168.106.3
https://192.168.106.3:2379 is healthy: successfully committed proposal: took = 1.899188ms
>>> 192.168.106.4
https://192.168.106.4:2379 is healthy: successfully committed proposal: took = 1.653305ms
>>> 192.168.106.5
https://192.168.106.5:2379 is healthy: successfully committed proposal: took = 1.811279ms
When every endpoint reports healthy, the cluster is serving normally.
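Beyond endpoint health, you can also confirm that all three members joined the cluster. A sketch using the same certificates, run against any one endpoint:
```bash
# List all members of the etcd cluster over TLS.
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
  --endpoints=https://192.168.106.3:2379 \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem member list
```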