Manually Setting Up Kubernetes 1.10.4: Deploying the flannel Network
This article was published quite a while ago; its content may be outdated, so read with discretion.
When flanneld starts for the first time, it fetches the Pod network configuration from etcd, allocates an unused /24 subnet to the local node, and then creates the flannel.1 interface (the name may differ, e.g. flannel1).
flannel writes the allocated Pod subnet information to the /run/flannel/docker file; docker later uses the environment variables in that file to configure the docker0 bridge.
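For a concrete picture of what ends up in that file, the variables below are what flannel's mk-docker-opts.sh typically emits; the exact subnet, MTU, and any additional options (such as --ip-masq) depend on your allocation and configuration:
DOCKER_OPT_BIP="--bip=172.30.84.1/24"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.30.84.1/24 --mtu=1450"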
# 1. Download and distribute the flanneld binaries
Download the latest release package from https://github.com/coreos/flannel/releases:
mkdir flannel
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
tar -xzvf flannel-v0.10.0-linux-amd64.tar.gz -C flannel
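To confirm the extracted binary matches the release you downloaded, a quick check (assuming the binary supports the usual -version flag):
./flannel/flanneld -version
# expected to print v0.10.0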
Distribute the flanneld binaries to all nodes in the cluster:
cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp flannel/{flanneld,mk-docker-opts.sh} k8s@${node_ip}:/opt/k8s/bin/
ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
done
EOF
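The magic.sh helper scripts in this article are only written to disk by the heredoc; run each one right after creating it (later sections show the expected output of doing so):
bash magic.sh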
# 2. Create the flanneld certificate and private key
flanneld reads and writes subnet allocation information in the etcd cluster, and the etcd cluster has mutual x509 certificate authentication enabled, so a certificate and private key must be generated for flanneld.
Create the certificate signing request:
cat > flanneld-csr.json <<EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
- This certificate is only used by flanneld as a client certificate, so the hosts field is empty;
Generate the certificate and private key:
$ cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
-ca-key=/etc/kubernetes/cert/ca-key.pem \
-config=/etc/kubernetes/cert/ca-config.json \
-profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
$ ls flanneld*pem
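As an optional sanity check (plain openssl, nothing specific to this setup), you can confirm the subject and validity period of the signed certificate:
$ openssl x509 -in flanneld.pem -noout -subject -dates
# the subject should contain CN=flanneld and O=k8s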
Distribute the generated certificate and private key to all nodes (masters and workers):
cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /etc/flanneld/cert && chown -R k8s /etc/flanneld"
scp flanneld*.pem k8s@${node_ip}:/etc/flanneld/cert
done
EOF
# 3. Write the cluster Pod network information to etcd
Note: this step only needs to be performed once.
$ source /opt/k8s/bin/environment.sh
$ etcdctl \
--endpoints=${ETCD_ENDPOINTS} \
--ca-file=/etc/kubernetes/cert/ca.pem \
--cert-file=/etc/flanneld/cert/flanneld.pem \
--key-file=/etc/flanneld/cert/flanneld-key.pem \
set ${FLANNEL_ETCD_PREFIX}/config '{"Network":"'${CLUSTER_CIDR}'", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}'
- The current version of flanneld (v0.10.0) does not support etcd v3, so the etcd v2 API is used to write the configuration key and subnet data;
- The Pod network ${CLUSTER_CIDR} written here must be a /16 range and must match the --cluster-cidr parameter of kube-controller-manager (see the capacity note below);
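Capacity note: with a /16 Network and SubnetLen 24, flannel can allocate at most 2^(24-16) = 256 node subnets, e.g. 172.30.0.0/24 through 172.30.255.0/24 when CLUSTER_CIDR is 172.30.0.0/16, so each node consumes one of 256 available /24 ranges.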
# 4. Create the flanneld systemd unit file
$ source /opt/k8s/bin/environment.sh
$ cat > flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
ExecStart=/opt/k8s/bin/flanneld \\
-etcd-cafile=/etc/kubernetes/cert/ca.pem \\
-etcd-certfile=/etc/flanneld/cert/flanneld.pem \\
-etcd-keyfile=/etc/flanneld/cert/flanneld-key.pem \\
-etcd-endpoints=${ETCD_ENDPOINTS} \\
-etcd-prefix=${FLANNEL_ETCD_PREFIX} \\
-iface=${VIP_IF}
ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF
- The mk-docker-opts.sh script writes the Pod subnet allocated to flanneld into the /run/flannel/docker file; when docker starts later, it uses the environment variables in that file to configure the docker0 bridge (see the sketch after this list);
- flanneld communicates with other nodes over the interface that holds the system default route; on nodes with multiple network interfaces (e.g. an internal and a public one), the -iface parameter selects the interface to use, such as the ${VIP_IF} above (typically eth0);
- flanneld must run with root privileges;
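For context, here is a minimal sketch of how docker picks this file up; the actual docker unit is configured in the later docker deployment step, and the dockerd path and remaining flags below are illustrative only:
# illustrative excerpt from docker.service
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS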
# 5. Distribute the flanneld systemd unit file to all nodes
cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
scp flanneld.service root@${node_ip}:/etc/systemd/system/
done
EOF
# 6. Start the flanneld service
cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable flanneld && systemctl start flanneld"
done
EOF
# 7. Check the startup result
cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh k8s@${node_ip} "systemctl status flanneld|grep Active"
done
EOF
If you see output like the following:
$ bash magic.sh
>>> 192.168.106.3
Active: active (running) since Fri 2018-11-23 17:11:40 CST; 6h ago
>>> 192.168.106.4
Active: active (running) since Fri 2018-11-23 17:11:40 CST; 6h ago
>>> 192.168.106.5
Active: active (running) since Fri 2018-11-23 17:11:41 CST; 6h ago
then flanneld started normally. If it failed, check the logs with:
journalctl -u flanneld
# 8. Check the Pod subnet information allocated to each flanneld
View the cluster Pod network (/16):
source /opt/k8s/bin/environment.sh
etcdctl \
--endpoints=${ETCD_ENDPOINTS} \
--ca-file=/etc/kubernetes/cert/ca.pem \
--cert-file=/etc/flanneld/cert/flanneld.pem \
--key-file=/etc/flanneld/cert/flanneld-key.pem \
get ${FLANNEL_ETCD_PREFIX}/config
Output:
{"Network":"172.30.0.0/16","SubnetLen": 24, "Backend": {"Type": "vxlan"}}
View the list of allocated Pod subnets (/24):
source /opt/k8s/bin/environment.sh
etcdctl \
--endpoints=${ETCD_ENDPOINTS} \
--ca-file=/etc/kubernetes/cert/ca.pem \
--cert-file=/etc/flanneld/cert/flanneld.pem \
--key-file=/etc/flanneld/cert/flanneld-key.pem \
ls ${FLANNEL_ETCD_PREFIX}/subnets
Output:
/kubernetes/network/subnets/172.30.84.0-24
/kubernetes/network/subnets/172.30.8.0-24
/kubernetes/network/subnets/172.30.29.0-24
View the node IP and flannel interface address for a given Pod subnet:
Note: replace the subnet below with one from your own cluster.
source /opt/k8s/bin/environment.sh
etcdctl \
--endpoints=${ETCD_ENDPOINTS} \
--ca-file=/etc/kubernetes/cert/ca.pem \
--cert-file=/etc/flanneld/cert/flanneld.pem \
--key-file=/etc/flanneld/cert/flanneld-key.pem \
get ${FLANNEL_ETCD_PREFIX}/subnets/172.30.8.0-24
Output:
{"PublicIP":"192.168.106.4","BackendType":"vxlan","BackendData":{"VtepMAC":"f2:14:20:50:4f:af"}}
# 9. Verify that the nodes can reach each other over the Pod network
After flannel has been deployed on all nodes, check that the flannel interface was created (its name may be flannel0, flannel.0, flannel.1, etc.):
cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh ${node_ip} "/usr/sbin/ip addr show flannel.1|grep -w inet"
done
EOF
Output:
$ bash magic.sh
>>> 192.168.106.3
inet 172.30.84.0/32 scope global flannel.1
>>> 192.168.106.4
inet 172.30.8.0/32 scope global flannel.1
>>> 192.168.106.5
inet 172.30.29.0/32 scope global flannel.1
Ping all flannel interface IPs from each node to make sure they are reachable:
Note: replace the IPs below with the ones from your own cluster; a variant that derives the targets automatically is sketched after the output below.
cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_IPS[@]}
do
echo ">>> ${node_ip}"
ssh ${node_ip} "ping -c 1 172.30.8.0"
ssh ${node_ip} "ping -c 1 172.30.29.0"
ssh ${node_ip} "ping -c 1 172.30.84.0"
done
EOF
Output:
$ bash magic.sh
>>> 192.168.106.3
PING 172.30.8.0 (172.30.8.0) 56(84) bytes of data.
64 bytes from 172.30.8.0: icmp_seq=1 ttl=64 time=0.285 ms
--- 172.30.8.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms
PING 172.30.29.0 (172.30.29.0) 56(84) bytes of data.
64 bytes from 172.30.29.0: icmp_seq=1 ttl=64 time=0.337 ms
--- 172.30.29.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms
PING 172.30.84.0 (172.30.84.0) 56(84) bytes of data.
64 bytes from 172.30.84.0: icmp_seq=1 ttl=64 time=0.062 ms
--- 172.30.84.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
>>> 192.168.106.4
PING 172.30.8.0 (172.30.8.0) 56(84) bytes of data.
64 bytes from 172.30.8.0: icmp_seq=1 ttl=64 time=0.055 ms
--- 172.30.8.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms
PING 172.30.29.0 (172.30.29.0) 56(84) bytes of data.
64 bytes from 172.30.29.0: icmp_seq=1 ttl=64 time=0.311 ms
--- 172.30.29.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms
PING 172.30.84.0 (172.30.84.0) 56(84) bytes of data.
64 bytes from 172.30.84.0: icmp_seq=1 ttl=64 time=0.395 ms
--- 172.30.84.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms
>>> 192.168.106.5
PING 172.30.8.0 (172.30.8.0) 56(84) bytes of data.
64 bytes from 172.30.8.0: icmp_seq=1 ttl=64 time=0.325 ms
--- 172.30.8.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms
PING 172.30.29.0 (172.30.29.0) 56(84) bytes of data.
64 bytes from 172.30.29.0: icmp_seq=1 ttl=64 time=0.060 ms
--- 172.30.29.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
PING 172.30.84.0 (172.30.84.0) 56(84) bytes of data.
64 bytes from 172.30.84.0: icmp_seq=1 ttl=64 time=0.260 ms
--- 172.30.84.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms
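If you prefer not to hardcode the subnet addresses, the following variant (a sketch; it assumes the same environment.sh, passwordless ssh, and the flannel.1 interface name used above) first collects each node's flannel.1 address and then pings every address from every node:
cat > magic.sh << "EOF"
#!/bin/bash
source /opt/k8s/bin/environment.sh
# collect each node's flannel.1 address
targets=""
for node_ip in ${NODE_IPS[@]}
do
  ip=$(ssh ${node_ip} "/usr/sbin/ip -4 addr show flannel.1" | grep -w inet | awk '{print $2}' | cut -d/ -f1)
  targets="${targets} ${ip}"
done
# ping every flannel address from every node
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  for t in ${targets}
  do
    ssh ${node_ip} "ping -c 1 ${t}"
  done
done
EOF
bash magic.sh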