Installing a k8s 1.10 cluster with kubeadm

二丫讲梵 · 2018-12-06

This article was published quite a while ago; some of its content may be outdated, so read with discretion.

All the required packages have been downloaded and bundled together; based on these packages alone, the whole deployment can be completed.

The bundle has been uploaded to Baidu Cloud:

  • Download: https://pan.baidu.com/s/1YPYO9PKpkdxRzsJDGsu5jw

  • Extraction code: 9d5h

This walkthrough uses two servers: one as the master and one as the node.

# 1. Initialization

Perform these initialization steps on both servers.

```bash
yum -y install wget ntpdate lrzsz curl && ntpdate -u cn.pool.ntp.org
```

Disable swap.

```bash
swapoff -a
sed -i '11s/^/#/g' /etc/fstab  # for the permanent change, adjust the line number to where the swap entry actually sits
```
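Since a hard-coded line number is fragile, a pattern-based variant (a sketch) comments out the swap entry wherever it is:

```bash
# comment out any /etc/fstab line that mounts swap, regardless of its position
sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```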

Add both machines to /etc/hosts.

```
192.168.106.4 master
192.168.106.5 node
```
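These names line up best when each machine's hostname matches; if needed, set the hostnames first (a quick sketch, run on the respective machine):

```bash
hostnamectl set-hostname master   # on 192.168.106.4
hostnamectl set-hostname node     # on 192.168.106.5
```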

Tune kernel parameters.

```bash
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
```

Then load the bridge module and apply the settings.

```bash
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
```
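To confirm the values took effect, you can read them back:

```bash
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# expected:
# net.bridge.bridge-nf-call-iptables = 1
# net.ipv4.ip_forward = 1
```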

# 2. Transfer the packages in

## 1. Install from the pre-built bundle

```bash
tar xf k8s.rpm.tar.gz
cd k8s.rpm
yum -y install ./*.rpm
```
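For reference, a bundle like this can be assembled on any internet-connected CentOS 7 machine, roughly as follows (a sketch; yumdownloader ships with yum-utils, and the docker-ce and kubernetes repos from the next section must already be configured):

```bash
yum -y install yum-utils
mkdir k8s.rpm
# download the RPMs plus all their dependencies without installing them
yumdownloader --resolve --destdir=k8s.rpm \
    docker-ce-17.12.1.ce-1.el7.centos kubelet-1.10.0-0 kubeadm-1.10.0-0 kubectl-1.10.0-0
tar czf k8s.rpm.tar.gz k8s.rpm
```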

That step simply installs packages that were cached ahead of time. If you follow the original route instead, install as follows.

## 2. Install the original way

Step 1: Install the necessary system tools.

```bash
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
```

Step 2: Add the repository definition.

```bash
wget http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo && mv docker-ce.repo /etc/yum.repos.d
```

Step 3: List the docker versions available for installation.

```bash
yum list docker-ce.x86_64 --showduplicates | sort -r
```

Step 4: Install a specific version.

```bash
yum -y install docker-ce-[VERSION]
```

Here we install the 17.12 release:

```bash
yum -y install docker-ce-17.12.1.ce-1.el7.centos
```

Step 5: Install kubeadm, kubelet, and kubectl.

First configure the Aliyun repository.

```bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```

Then install:

```bash
yum makecache fast && yum install -y kubelet-1.10.0-0 kubeadm-1.10.0-0 kubectl-1.10.0-0
```

# 3. Start docker

Start docker and enable it at boot.

```bash
systemctl start docker && systemctl enable docker && systemctl status docker
```

# 4. Import the images

On the master:

```bash
docker load -i image-master.tar
```

Retag the images:

```bash
docker tag cnych/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
docker tag cnych/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
docker tag cnych/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
docker tag cnych/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag cnych/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag cnych/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag cnych/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag cnych/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
docker tag cnych/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag cnych/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
```
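Since almost every image follows the same cnych/* to k8s.gcr.io/* pattern, the retagging can also be scripted; a small sketch over the same image list:

```bash
#!/usr/bin/env bash
# retag the cnych mirror images to the names kubeadm expects
for img in kube-apiserver-amd64:v1.10.0 kube-scheduler-amd64:v1.10.0 \
           kube-controller-manager-amd64:v1.10.0 kube-proxy-amd64:v1.10.0 \
           k8s-dns-kube-dns-amd64:1.14.8 k8s-dns-dnsmasq-nanny-amd64:1.14.8 \
           k8s-dns-sidecar-amd64:1.14.8 etcd-amd64:3.1.12 pause-amd64:3.1; do
  docker tag "cnych/${img}" "k8s.gcr.io/${img}"
done
# flannel is the one exception: it goes to quay.io, not k8s.gcr.io
docker tag cnych/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
```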

On the node:

```bash
docker load -i image-node.tar
```

Retag these as well:

```bash
docker tag cnych/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag cnych/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker tag cnych/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag cnych/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
docker tag cnych/heapster-influxdb-amd64:v1.3.3 k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
docker tag cnych/heapster-grafana-amd64:v4.4.3 k8s.gcr.io/heapster-grafana-amd64:v4.4.3
docker tag cnych/heapster-amd64:v1.4.2 k8s.gcr.io/heapster-amd64:v1.4.2
```

# 5. Configure kubelet

After installation, kubelet still needs one adjustment: the unit file generated by the yum packages sets the --cgroup-driver flag to systemd, while docker's cgroup driver is cgroupfs, and the two must match. Check docker's driver with docker info:

```bash
[root@localhost ~]$ docker info | grep Cgroup
Cgroup Driver: cgroupfs
```

Edit kubelet's drop-in file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and change the KUBELET_CGROUP_ARGS parameter to cgroupfs:

```bash
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
```

After the change, reload systemd.

```bash
systemctl daemon-reload
```
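Alternatively, the mismatch can be resolved from the docker side: point docker at the systemd driver and leave the kubelet default alone (a sketch; it assumes /etc/docker/daemon.json does not exist yet):

```bash
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info | grep Cgroup   # should now report: Cgroup Driver: systemd
```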

# 6. Initialize the cluster

That completes the preparation. Now initialize the cluster from the master node with kubeadm:

```bash
[root@localhost ~]$ kubeadm init --kubernetes-version=v1.10.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.106.4
[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
    [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.106.4]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master] and IPs [192.168.106.4]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 19.502623 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master as master by adding a label and a taint
[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: y6829y.a3qvzgz8vm0horcr
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.106.4:6443 --token y6829y.a3qvzgz8vm0horcr --discovery-token-ca-cert-hash sha256:1c63e983c5319dc9b59a61133ad6e7941886125a603aa888f28547afc50a41ba
```

Following the hint in the output, run:

```bash
mkdir -p $HOME/.kube && sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
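When working as root, exporting the admin kubeconfig directly is an equivalent shortcut:

```bash
export KUBECONFIG=/etc/kubernetes/admin.conf
```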

Pay particular attention to the very last line of the output: that kubeadm join command is what every node will later use to join the cluster.

You can now check node status on this machine:

```bash
[root@localhost ~]$ kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
master    NotReady      master    2m        v1.10.0
```

The master reports NotReady because no pod network has been deployed yet; deploying flannel fixes that.
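kube-flannel.yml ships in the package bundle above; if you need to fetch it yourself, the v0.10.0 manifest was published in the flannel repository (the URL below is an assumption based on flannel's historical repo layout):

```bash
wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
```

Then apply it: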

```bash
[root@localhost ~]$ kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds-amd64" created
daemonset.extensions "kube-flannel-ds-arm64" created
daemonset.extensions "kube-flannel-ds-arm" created
daemonset.extensions "kube-flannel-ds-ppc64le" created
daemonset.extensions "kube-flannel-ds-s390x" created
```

Check again after a little while.

```bash
[root@localhost ~]$ kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
master    Ready      master    2m        v1.10.0
```

# 7. Join the node to the master

On the node, run the join command recorded earlier:

```bash
kubeadm join 192.168.106.4:6443 --token y6829y.a3qvzgz8vm0horcr --discovery-token-ca-cert-hash sha256:1c63e983c5319dc9b59a61133ad6e7941886125a603aa888f28547afc50a41ba
```

Shortly afterwards, check the status on the master:

```bash
[root@localhost ~]$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    3m        v1.10.0
node      Ready     <none>    1m        v1.10.0
```
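Bootstrap tokens expire after 24 hours by default; if the original join command has been lost or gone stale, a fresh one can be printed on the master (a sketch; the --print-join-command flag exists in kubeadm versions of this era and later):

```bash
kubeadm token create --print-join-command
# prints a ready-to-run: kubeadm join 192.168.106.4:6443 --token ... --discovery-token-ca-cert-hash sha256:...
```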

# 8. Deploy the dashboard

Add two files.

dashboard.yaml:

```yaml
# [root@localhost dashboard]$ cat dashboard.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: kubernetes-dashboard
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 9090
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
```

dashboard-svc.yaml:

```yaml
# [root@localhost dashboard]$ cat dashboard-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30033
```

Then create the resources:

```bash
[root@localhost dashboard]$ kubectl create -f dashboard.yaml
serviceaccount "kubernetes-dashboard" created

[root@localhost dashboard]$ kubectl create -f dashboard-svc.yaml
service "kubernetes-dashboard" created
```

Then visit any node's IP on port 30033 to see the dashboard UI.
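To confirm everything is up before opening a browser, a quick check (the node IP here follows the example above):

```bash
kubectl get pods,svc -n kube-system | grep dashboard
# then, from any machine that can reach the nodes:
curl -I http://192.168.106.5:30033/
```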

  • This article was compiled with reference to:
    • https://www.jianshu.com/p/9c2e6733ef9b
    • https://www.qikqiak.com/post/manual-install-high-available-kubernetes-cluster/#10-%E9%83%A8%E7%BD%B2dashboard-%E6%8F%92%E4%BB%B6-a-id-dashboard-a