二丫讲梵
2018-10-06

Deploying the Web UI: Kubernetes 1.8.6 Cluster Setup

This article was published quite a while ago; some of its content may be outdated, so read with discretion.

# 1. Install the DNS add-on

Perform the installation on the master node.

Download the installation files:

# cd
# wget https://github.com/kubernetes/kubernetes/releases/download/v1.8.6/kubernetes.tar.gz
# tar xzvf kubernetes.tar.gz

# cd /root/kubernetes/cluster/addons/dns
# mv  kubedns-svc.yaml.sed kubedns-svc.yaml

# Replace $DNS_SERVER_IP in the file with 10.254.0.2
# sed -i 's/$DNS_SERVER_IP/10.254.0.2/g' ./kubedns-svc.yaml

# mv ./kubedns-controller.yaml.sed ./kubedns-controller.yaml

# Replace $DNS_DOMAIN with cluster.local
# sed -i 's/$DNS_DOMAIN/cluster.local/g' ./kubedns-controller.yaml

# ls *.yaml
kubedns-cm.yaml  kubedns-controller.yaml  kubedns-sa.yaml  kubedns-svc.yaml

[root@master1 dns]# kubectl create -f .
configmap "kube-dns" created
deployment "kube-dns" created
serviceaccount "kube-dns" created
service "kube-dns" created
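To confirm the DNS add-on actually works (my addition, not part of the original walkthrough), a quick check might look like the following; the busybox image and test pod name are assumptions:

# check that the kube-dns pods come up
kubectl get pods -n kube-system -l k8s-app=kube-dns

# resolve the kubernetes service from inside the cluster
kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it -- \
  nslookup kubernetes.default.svc.cluster.local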

# 2. Deploy the dashboard add-on

Deploy on the master node.

Create the deployment manifests.

Two files are needed.

# 1. File one

dashboard.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
    spec:
      serviceAccountName: kubernetes-dashboard
      containers:
        - name: kubernetes-dashboard
          image: registry.docker-cn.com/kubernetesdashboarddev/kubernetes-dashboard-amd64:head
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 9090
          livenessProbe:
            httpGet:
              path: /
              port: 9090
            initialDelaySeconds: 30
            timeoutSeconds: 30
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"

# 2. File two

dashboard-svc.yaml:

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
  ports:
    - port: 9090
      targetPort: 9090
      nodePort: 8888

# 3. Apply the two manifests to create the resources

kubectl create -f dashboard.yaml

kubectl create -f dashboard-svc.yaml
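As a quick sanity check (my addition), confirm the objects were created before moving on:

kubectl get deployment,svc -n kube-system -l k8s-app=kubernetes-dashboard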

# 4. How to access it?

[root@localhost ~]$kubectl get pod -n kube-system -o wide
NAME                                    READY     STATUS              RESTARTS   AGE   IP            NODE
kube-dns-778977457c-2rwdj               0/3       ErrImagePull        0          1m    172.30.24.2   192.168.106.5
kubernetes-dashboard-8665cd4dfb-jzxg6   0/1       ContainerCreating   0          26s   <none>        192.168.106.4

First, check whether the deployed service is up.

As shown above, our kubernetes-dashboard pod has clearly been scheduled onto 192.168.106.4.

So let's try to access it:

http://192.168.106.4:8888
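One thing worth noting (my addition): 8888 lies outside the default NodePort range of 30000-32767, so this only binds if the apiserver was started with a widened --service-node-port-range, which I assume was done earlier in this series. A quick reachability check from any host that can reach the node:

curl -I http://192.168.106.4:8888/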

# 5. Deploy the heapster add-on

# 1. Download the installation files

# wget https://github.com/kubernetes/heapster/archive/v1.5.0.tar.gz

# tar xzvf ./v1.5.0.tar.gz

# cd ./heapster-1.5.0/

[root@master1 heapster-1.5.0]# kubectl create -f deploy/kube-config/influxdb/
deployment "monitoring-grafana" created
service "monitoring-grafana" created
serviceaccount "heapster" created
deployment "heapster" created
service "heapster" created
deployment "monitoring-influxdb" created
service "monitoring-influxdb" created

[root@master1 heapster-1.5.0]#  kubectl create -f deploy/kube-config/rbac/heapster-rbac.yaml
clusterrolebinding "heapster" created

# 2. Deployment fails

This deployment failed, though. We can check the pod status from the command line.

[root@master ~]$kubectl get pod -n kube-system -o wide
NAME                                    READY     STATUS             RESTARTS   AGE       IP             NODE
heapster-55c5d9c56b-wh88h               0/1       ImagePullBackOff (image problem)   0          1d        172.30.101.2   192.168.106.4
helloworldapi-6ff9f5b6db-hstbr          0/1       ImagePullBackOff   0          4h        172.30.24.4    192.168.106.5
helloworldapi-6ff9f5b6db-qtg85          1/1       Running            0          4h        172.30.101.5   192.168.106.4
kube-dns-778977457c-2rwdj               0/3       ErrImagePull       0          1d        172.30.24.2    192.168.106.5
kubernetes-dashboard-8665cd4dfb-jzxg6   1/1       Running            1          1d        172.30.101.4   192.168.106.4
monitoring-grafana-5bccc9f786-2tqsn     0/1       ImagePullBackOff   0          1d        172.30.24.3    192.168.106.5
monitoring-influxdb-85cb4985d4-q8dp6    0/1       ImagePullBackOff   0          1d        172.30.101.3   192.168.106.4

The failure details can also be seen in the k8s web UI:

(screenshot)

You can see the reported problem is a failed image pull; hahaha, gcr.io is blocked by the Great Firewall.

The workaround is to pull the image on a host that can reach it, then transfer and load it.

# 3. Download the image

For proxy configuration, go this way: http://doc.eryajf.net/docs/test/test-1am8o26r6csm9
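As a rough sketch of what the linked doc covers (my addition, assuming a systemd-managed Docker and a local proxy at 127.0.0.1:8118, both of which are assumptions), the daemon-side proxy usually goes into a drop-in:

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTPS_PROXY=http://127.0.0.1:8118"
EOF
systemctl daemon-reload && systemctl restart docker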

Then pull the image on that host:

[root@localhost ~]$docker pull gcr.io/google_containers/heapster-amd64:v1.4.2

Trying to pull repository gcr.io/google_containers/heapster-amd64 ...
v1.4.2: Pulling from gcr.io/google_containers/heapster-amd64
4e4eeec2f874: Pull complete
5c479041def4: Pull complete
Digest: sha256:f58ded16b56884eeb73b1ba256bcc489714570bacdeca43d4ba3b91ef9897b20
Status: Downloaded newer image for gcr.io/google_containers/heapster-amd64:v1.4.2

# 4. Save the image

After pulling, save the image to a file.

List the images:
[root@localhost ~]$docker images

REPOSITORY                                TAG                 IMAGE ID            CREATED             SIZE
gcr.io/google_containers/heapster-amd64   v1.4.2              d4e02f5922ca        13 months ago       73.4 MB

Save the image:
[root@localhost ~]$docker save gcr.io/google_containers/heapster-amd64 -o heapster-amd64
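A small optional variation (my addition): compress the archive on the fly to shrink the transfer; docker load accepts gzipped input transparently:

docker save gcr.io/google_containers/heapster-amd64:v1.4.2 | gzip > heapster-amd64.tar.gz
scp heapster-amd64.tar.gz root@192.168.106.3:/mnt/
# on the receiving host:
docker load -i heapster-amd64.tar.gz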

# 5. Load the image

Then copy the image file over to the k8s master and load it.

Transfer:
[root@localhost ~]$scp heapster-amd64 root@192.168.106.3:/mnt/

Load:
[root@master mnt]$cat heapster-amd64 | ssh root@node01 'docker load'
Loaded image: gcr.io/google_containers/heapster-amd64:v1.4.2
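Since the scheduler may place the pod on any node, the same image generally needs to be present on every node. A small loop handles that (my addition; the node hostnames are assumptions taken from this series):

for n in node01 node02; do
  cat heapster-amd64 | ssh root@"$n" 'docker load'
done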

Then check the images on the corresponding node:

[root@node01 ~]$docker images
REPOSITORY                                                                 TAG                 IMAGE ID            CREATED             SIZE
registry.docker-cn.com/kubernetesdashboarddev/kubernetes-dashboard-amd64   head                cf8ed7dd44bb        4 days ago          122MB
192.168.106.3:5000/centos                                                  v0323               5182e96772bf        7 weeks ago         200MB
192.168.106.3:5000/helloworldapi                                           v2.2                85378ce05520        6 months ago        329MB
registry.access.redhat.com/rhel7/pod-infrastructure                        latest              99965fb98423        11 months ago       209MB
gcr.io/google_containers/heapster-amd64                                    v1.4.2              d4e02f5922ca        13 months ago       73.4MB

# 6. Verify the result

Once the image has been loaded, the kubelet will automatically retry and start the container (its image-pull back-off resolves as soon as the image is present locally).

First, check from the master:

[root@master mnt]$kubectl get pod -n kube-system -o wide
NAME                                    READY     STATUS             RESTARTS   AGE       IP             NODE
heapster-55c5d9c56b-wh88h               1/1       Running (it's up!)            0          1d        172.30.101.2   192.168.106.4
helloworldapi-6ff9f5b6db-hstbr          0/1       ImagePullBackOff   0          5h        172.30.24.4    192.168.106.5
helloworldapi-6ff9f5b6db-qtg85          1/1       Running            0          5h        172.30.101.5   192.168.106.4
kube-dns-778977457c-2rwdj               0/3       ImagePullBackOff   0          1d        172.30.24.2    192.168.106.5
kubernetes-dashboard-8665cd4dfb-jzxg6   1/1       Running            1          1d        172.30.101.4   192.168.106.4
monitoring-grafana-5bccc9f786-2tqsn     0/1       ImagePullBackOff   0          1d        172.30.24.3    192.168.106.5
monitoring-influxdb-85cb4985d4-q8dp6    0/1       ImagePullBackOff   0          1d        172.30.101.3   192.168.106.4

Then check the web UI:

(screenshot)

Wait a little longer to give it time to collect metrics, and the monitoring graphs will show up.

A little while later...

Cluster overview:

(screenshot)

Node view:

(screenshot)

Node details:

(screenshot)

# 6. A few more add-ons

When I checked the created pods, everything except the dashboard had failed. So next, following the same recipe as above, download the corresponding images and sort them out.

To speed up the downloads, you can swap the Google images for Aliyun mirrors; the pairs below list the gcr.io name each manifest expects, followed by the mirror pull command (a batch pull-and-retag sketch follows the list).

gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.1
docker pull registry.docker-cn.com/kubernetesdashboarddev/kubernetes-dashboard-amd64:head


gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5
docker pull registry.cn-hangzhou.aliyuncs.com/wonders/k8s-dns-kube-dns-amd64:1.14.5


gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
docker pull registry.cn-hangzhou.aliyuncs.com/inspur_research/heapster-grafana-amd64:v4.4.3


gcr.io/google_containers/heapster-amd64:v1.4.2
docker pull registry.cn-hangzhou.aliyuncs.com/inspur_research/heapster-amd64:v1.4.2


gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3
docker pull registry.cn-hangzhou.aliyuncs.com/inspur_research/heapster-influxdb-amd64:v1.3.3


gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
docker pull registry.cn-hangzhou.aliyuncs.com/inspur_research/k8s-dns-dnsmasq-nanny-amd64:1.14.5
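A minimal batch sketch (my addition) that pulls each mirror and retags it to the gcr.io name the manifests expect; the image pairs come straight from the list above:

#!/bin/bash
# format: "<name the manifests expect>=<mirror to pull from>"
pairs=(
  "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5=registry.cn-hangzhou.aliyuncs.com/wonders/k8s-dns-kube-dns-amd64:1.14.5"
  "gcr.io/google_containers/heapster-grafana-amd64:v4.4.3=registry.cn-hangzhou.aliyuncs.com/inspur_research/heapster-grafana-amd64:v4.4.3"
  "gcr.io/google_containers/heapster-amd64:v1.4.2=registry.cn-hangzhou.aliyuncs.com/inspur_research/heapster-amd64:v1.4.2"
  "gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3=registry.cn-hangzhou.aliyuncs.com/inspur_research/heapster-influxdb-amd64:v1.3.3"
  "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5=registry.cn-hangzhou.aliyuncs.com/inspur_research/k8s-dns-dnsmasq-nanny-amd64:1.14.5"
)
for p in "${pairs[@]}"; do
  mirror="${p#*=}"    # Aliyun mirror
  target="${p%%=*}"   # gcr.io name the manifests reference
  docker pull "$mirror"
  docker tag "$mirror" "$target"
done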

Contributed by a reader:

hub.c.163.com/zhijiansd/k8s-dns-kube-dns-amd64:1.14.5

hub.c.163.com/zhijiansd/k8s-dns-dnsmasq-nanny-amd64:1.14.5

hub.c.163.com/zhijiansd/k8s-dns-sidecar-amd64:1.14.5

Or search Aliyun's image registry directly:

https://dev.aliyun.com/search.html

After downloading, repeat the same recipe, and you can see:

(screenshot)

But then a new problem appeared: because these images were pulled under their Aliyun repository names, the deployment still failed.

OK, time for another round of troubleshooting.

Check the pods on the server.

[root@master mnt]$kubectl get pods
No resources found.

Huh? Why is nothing showing? This way of looking no longer works: the deployed pods all live inside their own namespace, so from now on the namespace has to be specified in queries (for listing across every namespace, see the sketch after the output below).

[root@master mnt]$kubectl get pods -n kube-system
NAME                                    READY     STATUS             RESTARTS   AGE
heapster-55c5d9c56b-wh88h               1/1       Running            0          1d
helloworldapi-6ff9f5b6db-hstbr          0/1       ErrImagePull       0          6h
helloworldapi-6ff9f5b6db-qtg85          1/1       Running            0          6h
kube-dns-778977457c-2rwdj               0/3       CrashLoopBackOff   6          1d
kubernetes-dashboard-8665cd4dfb-jzxg6   1/1       Running            1          1d
monitoring-grafana-5bccc9f786-2tqsn     1/1       Running            0          1d
monitoring-influxdb-85cb4985d4-q8dp6    1/1       Running            0          1d
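If you are not sure which namespace a pod landed in (my addition), list across all of them:

kubectl get pods --all-namespaces -o wide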

Look at the detailed description:

[root@master mnt]$kubectl -n kube-system describe pods kube-dns-778977457c-2rwdj
Name:           kube-dns-778977457c-2rwdj
Namespace:      kube-system
Node:           192.168.106.5/192.168.106.5
Start Time:     Sat, 29 Sep 2018 11:44:34 +0800
Labels:         k8s-app=kube-dns
                pod-template-hash=3345330137
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"kube-dns-778977457c","uid":"fa63df41-c399-11e8-823b-525400c7...
                scheduler.alpha.kubernetes.io/critical-pod=
Status:         Pending
IP:             172.30.24.2
Created By:     ReplicaSet/kube-dns-778977457c
Controlled By:  ReplicaSet/kube-dns-778977457c
Containers:
  kubedns:
    Container ID:
    Image:         gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5
    Image ID:
    Ports:         10053/UDP, 10053/TCP, 10055/TCP
    Args:
      --domain=cluster.local.
      --dns-port=10053
      --config-dir=/kube-dns-config
      --v=2
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:      100m
      memory:   70Mi
    Liveness:   http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:  http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
    Environment:
      PROMETHEUS_PORT:  10055
    Mounts:
      /kube-dns-config from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-vsmtl (ro)
  dnsmasq:
    Container ID:  docker://61e6ea7f3c478c0d925b221ca31b803ffeb023c1a5bc8a391fe490f63bffa971
    Image:         gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
    Image ID:      docker://sha256:96c7a868215290b21a9600a46ca2f80a7ceaa49ec6811ca2be7da6fcd8aaac77
    Ports:         53/UDP, 53/TCP
    Args:
      -v=2
      -logtostderr
      -configDir=/etc/k8s/dns/dnsmasq-nanny
      -restartDnsmasq=true
      --
      -k
      --cache-size=1000
      --log-facility=-
      --server=/cluster.local/127.0.0.1#10053
      --server=/in-addr.arpa/127.0.0.1#10053
      --server=/ip6.arpa/127.0.0.1#10053
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Sun, 30 Sep 2018 15:49:59 +0800
      Finished:     Sun, 30 Sep 2018 15:51:59 +0800
    Ready:          False
    Restart Count:  6
    Requests:
      cpu:        150m
      memory:     20Mi
    Liveness:     http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-vsmtl (ro)
  sidecar:
    Container ID:
    Image:         gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5
    Image ID:
    Port:          10054/TCP
    Args:
      --v=2
      --logtostderr
      --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
      --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        10m
      memory:     20Mi
    Liveness:     http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-vsmtl (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  kube-dns-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-dns
    Optional:  true
  kube-dns-token-vsmtl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-dns-token-vsmtl
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
Events:
  Type     Reason      Age                  From                    Message
  ----     ------      ----                 ----                    -------
  Warning  FailedSync  38m (x6369 over 1d)  kubelet, 192.168.106.5  Error syncing pod
  Warning  Failed      33m (x313 over 1d)   kubelet, 192.168.106.5  Failed to pull image "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed      28m (x314 over 1d)   kubelet, 192.168.106.5  Failed to pull image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Normal   BackOff     18m (x6135 over 1d)  kubelet, 192.168.106.5  Back-off pulling image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5"
  Warning  Unhealthy   8m (x14 over 14m)    kubelet, 192.168.106.5  Liveness probe failed: Get http://172.30.24.2:10054/healthcheck/dnsmasq: dial tcp 172.30.24.2:10054: getsockopt: connection refused
  Normal   Killing     3m (x6 over 13m)     kubelet, 192.168.106.5  Killing container with id docker://dnsmasq:Container failed liveness probe.. Container will be killed and recreated.

The last few events say, roughly, that the images still cannot be found, so let's go to host 192.168.106.5 and see whether the image is there.

[root@node02 log]$docker images
REPOSITORY                                                                      TAG                 IMAGE ID            CREATED             SIZE
gcr.io/google_containers/heapster-grafana-amd64                                 v4.4.3              edd7018a59de        8 months ago        152MB
registry.cn-hangzhou.aliyuncs.com/inspur_research/heapster-grafana-amd64        v4.4.3              edd7018a59de        8 months ago        152MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64                            1.14.5              96c7a8682152        8 months ago        41.4MB
registry.cn-hangzhou.aliyuncs.com/inspur_research/k8s-dns-dnsmasq-nanny-amd64   1.14.5              96c7a8682152        8 months ago        41.4MB
registry.cn-hangzhou.aliyuncs.com/wonders/k8s-dns-kube-dns-amd64                1.14.5              38844359ff14        9 months ago        49.4MB
registry.access.redhat.com/rhel7/pod-infrastructure                             latest              99965fb98423        11 months ago       209MB

Hang on, isn't the fifth entry exactly that image? But notice that because it was downloaded from the Aliyun repository, its tag does not match the name the error reports, which is why the pull keeps failing. The fix is simply to retag it.

Be sure to keep the trailing version number when retagging.

[root@node02 log]$docker tag registry.cn-hangzhou.aliyuncs.com/wonders/k8s-dns-kube-dns-amd64:1.14.5 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5
[root@node02 log]$
[root@node02 log]$docker images
REPOSITORY                                                                      TAG                 IMAGE ID            CREATED             SIZE
gcr.io/google_containers/heapster-grafana-amd64                                 v4.4.3              edd7018a59de        8 months ago        152MB
registry.cn-hangzhou.aliyuncs.com/inspur_research/heapster-grafana-amd64        v4.4.3              edd7018a59de        8 months ago        152MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64                            1.14.5              96c7a8682152        8 months ago        41.4MB
registry.cn-hangzhou.aliyuncs.com/inspur_research/k8s-dns-dnsmasq-nanny-amd64   1.14.5              96c7a8682152        8 months ago        41.4MB
gcr.io/google_containers/k8s-dns-kube-dns-amd64                                 1.14.5              38844359ff14        9 months ago        49.4MB
registry.cn-hangzhou.aliyuncs.com/wonders/k8s-dns-kube-dns-amd64                1.14.5              38844359ff14        9 months ago        49.4MB
registry.access.redhat.com/rhel7/pod-infrastructure                             latest              99965fb98423        11 months ago       209MB
1
2
3
4
5
6
7
8
9
10
11
12

After the change, go back to the master and check the pod status.

The logs during this period can also be watched via /var/log/messages on the node.
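A minimal way to watch both sides at once (my addition):

# on the master: watch the pods transition to Running
kubectl get pods -n kube-system -w

# on the node: follow kubelet messages in the system log
tail -f /var/log/messages | grep -i kubelet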

At this point, this so-called installation walkthrough is finally done. In truth it is only one tiny, tiny step: going through the install merely teaches you which components k8s needs. What each component is, what each one does, how they cooperate, and what k8s looks like in real applications are all things that still need to be dug into.

Last updated: 2024/07/04, 22:40:37