kubeadm deployment failure caused by an unclean environment, and how to fix it

Let me give the fix up front. It comes from:

https://github.com/kubernetes/kubeadm/issues/1092

commented on Sep 12, 2018

$ kubeadm reset
$ ifconfig cni0 down && ip link delete cni0
$ ifconfig flannel.1 down && ip link delete flannel.1
$ rm -rf /var/lib/cni/

good luck!
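
If you want the whole cleanup in one shot, here is a minimal sketch that wraps the steps above into a single script. It assumes flannel was the CNI plugin in use (cni0 and flannel.1 are flannel-specific interface names) and simply skips interfaces that do not exist instead of aborting:

#!/usr/bin/env bash
# cleanup-kubeadm.sh -- one-shot version of the steps above (sketch)
set -x

kubeadm reset -f                      # --force skips the interactive y/N prompt

# These interfaces only exist if flannel actually ran; ignore "does not exist" errors.
ip link set cni0 down 2>/dev/null
ip link delete cni0 2>/dev/null
ip link set flannel.1 down 2>/dev/null
ip link delete flannel.1 2>/dev/null

rm -rf /var/lib/cni/                  # stale IP allocations from the old CNI config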

A normal deployment attempt produced a pile of errors:

[root@server3 yum.repos.d]# kubeadm init --apiserver-advertise-address 176.204.66.113 --pod-network-cidr=10.244.0.0/16
I0521 16:38:11.115087 12754 version.go:236] remote version is much newer: v1.14.2; falling back to: stable-1.12
[init] using Kubernetes version: v1.12.8
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [server3 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [server3 localhost] and IPs [176.204.66.103 127.0.0.1 ::1]
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [server3 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 176.204.66.103]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory “/etc/kubernetes/manifests”
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 25.502697 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node server3 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node server3 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
error marking master: timed out waiting for the condition
[root@server3 yum.repos.d]# kubeadm init --apiserver-advertise-address 176.204.66.113 --pod-network-cidr=10.244.0.0/16
I0521 16:41:09.835342 14174 version.go:236] remote version is much newer: v1.14.2; falling back to: stable-1.12
[init] using Kubernetes version: v1.12.8
[preflight] running pre-flight checks
[preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Port-10251]: Port 10251 is in use
[ERROR Port-10252]: Port 10252 is in use
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Chasing the errors through journalctl did not help either; in hindsight I was looking in the wrong direction. (The first init had timed out at the markmaster step but left its control-plane containers running and its files on disk, which is why the retry tripped every port and file preflight check.)

[root@server3 yum.repos.d]# journalctl -xe
May 21 16:41:39 server3 kubelet[12986]: E0521 16:41:39.447307 12986 kubelet.go:2244] node "server3" not found
May 21 16:41:39 server3 kubelet[12986]: E0521 16:41:39.547518 12986 kubelet.go:2244] node "server3" not found
May 21 16:41:39 server3 kubelet[12986]: I0521 16:41:39.604606 12986 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 21 16:41:39 server3 kubelet[12986]: I0521 16:41:39.608043 12986 kubelet_node_status.go:72] Attempting to register node server3
May 21 16:41:39 server3 dockerd-current[30957]: E0521 08:41:39.608959 1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority, x509: certi
May 21 16:41:39 server3 kubelet[12986]: E0521 16:41:39.609473 12986 kubelet_node_status.go:94] Unable to register node "server3" with API server: Unauthorized
May 21 16:41:39 server3 kubelet[12986]: E0521 16:41:39.647690 12986 kubelet.go:2244] node "server3" not found
May 21 16:41:39 server3 kubelet[12986]: E0521 16:41:39.747912 12986 kubelet.go:2244] node "server3" not found
May 21 16:41:39 server3 dockerd-current[30957]: E0521 08:41:39.803911 1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority, x509: certi
May 21 16:41:39 server3 kubelet[12986]: E0521 16:41:39.804379 12986 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
May 21 16:41:39 server3 kubelet[12986]: E0521 16:41:39.848169 12986 kubelet.go:2244] node "server3" not found
May 21 16:41:39 server3 kubelet[12986]: E0521 16:41:39.948407 12986 kubelet.go:2244] node "server3" not found
May 21 16:41:40 server3 dockerd-current[30957]: E0521 08:41:40.003894 1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority, x509: certi
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.004335 12986 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Unauthorized
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.048673 12986 kubelet.go:2244] node "server3" not found
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.148920 12986 kubelet.go:2244] node "server3" not found
May 21 16:41:40 server3 dockerd-current[30957]: E0521 08:41:40.203930 1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority, x509: certi
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.204444 12986 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Unauthorized
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.249130 12986 kubelet.go:2244] node "server3" not found
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.349343 12986 kubelet.go:2244] node "server3" not found
May 21 16:41:40 server3 dockerd-current[30957]: E0521 08:41:40.403983 1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority, x509: certi
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.404516 12986 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Unauthorized
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.449487 12986 kubelet.go:2244] node "server3" not found
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.549709 12986 kubelet.go:2244] node "server3" not found
May 21 16:41:40 server3 dockerd-current[30957]: E0521 08:41:40.603871 1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority, x509: certi
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.604379 12986 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Unauthorized
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.649934 12986 kubelet.go:2244] node "server3" not found
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.750133 12986 kubelet.go:2244] node "server3" not found
May 21 16:41:40 server3 dockerd-current[30957]: E0521 08:41:40.805437 1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority, x509: certi
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.805909 12986 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.850402 12986 kubelet.go:2244] node "server3" not found
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.950626 12986 kubelet.go:2244] node "server3" not found
May 21 16:41:41 server3 dockerd-current[30957]: E0521 08:41:41.005300 1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority, x509: certi
May 21 16:41:41 server3 kubelet[12986]: E0521 16:41:41.005753 12986 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Unauthorized
May 21 16:41:41 server3 kubelet[12986]: E0521 16:41:41.050825 12986 kubelet.go:2244] node "server3" not found
May 21 16:41:41 server3 polkitd[5794]: Registered Authentication Agent for unix-process:14438:1872848 (system bus name :1.280 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKi
May 21 16:41:41 server3 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
-- Subject: Unit kubelet.service has begun shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

-- Unit kubelet.service has begun shutting down.
May 21 16:41:41 server3 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

-- Unit kubelet.service has finished shutting down.
May 21 16:41:41 server3 polkitd[5794]: Unregistered Authentication Agent for unix-process:14438:1872848 (system bus name :1.280, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (
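
What the kubelet is really complaining about here is authentication: "x509: certificate signed by unknown authority" and "Unauthorized" strongly suggest that the retried init generated a fresh CA while components started by the earlier, half-finished attempt were still running with credentials from the old one. In that situation it is faster to check for leftover state than to chase individual log lines; a rough check against the default kubeadm paths (non-empty output means the node is not clean):

ls /etc/kubernetes/manifests/    # static Pod manifests left by the old control plane
ls /etc/kubernetes/pki/          # old CA and serving certificates
ls /var/lib/etcd/                # old etcd data directory
ss -lntp | grep -E ':6443|:2379|:10250|:10251|:10252'    # control-plane ports still bound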

The material I found on Chinese sites was even less helpful. A Google search finally turned up the solution, and it is actually very simple: kubeadm can clean up its own mess. Just run:

[root@server3 yum.repos.d]# kubeadm reset
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[root@server3 yum.repos.d]# rm -rf /var/lib/cni

[root@server3 yum.repos.d]# ifconfig flannel.1 down && ip link delete flannel.1
[root@server3 yum.repos.d]#
[root@server3 yum.repos.d]#
[root@server3 yum.repos.d]# ifconfig cni0 down && ip link delete cni0
cni0: ERROR while getting interface flags: No such device

(The "cni0: ERROR while getting interface flags: No such device" message is harmless; it just means there was no cni0 interface left to delete.) With the environment clean, run init again, and this time it works:

[root@server3 yum.repos.d]# kubeadm init --apiserver-advertise-address 176.204.66.113 --pod-network-cidr=10.244.0.0/16
I0521 16:46:56.439596 15220 version.go:236] remote version is much newer: v1.14.2; falling back to: stable-1.12
[init] using Kubernetes version: v1.12.8
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [server3 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [server3 localhost] and IPs [176.204.66.103 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [server3 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 176.204.66.103]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory “/etc/kubernetes/manifests”
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 25.503495 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node server3 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node server3 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "server3" as an annotation
[bootstraptoken] using token: au85dy.56wx3mc5mqyxsam9
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 176.204.66.103:6443 --token au85dy.56wx3mc5mqyxsam9 --discovery-token-ca-cert-hash sha256:43362c8c646283747d22a2b053cd3eff4f2753f0dc494f8aab75435b405abc20

[root@server3 yum.repos.d]#
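
Because init was run with --pod-network-cidr=10.244.0.0/16, which is the Pod CIDR flannel expects by default, the natural next step is to deploy flannel as the Pod network. A sketch, after copying admin.conf to ~/.kube/config as instructed above (the flannel manifest URL below is the one commonly used at the time and may have moved since):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl get pods -n kube-system    # the coredns Pods go Running once the network is up
kubectl get nodes                  # the node should eventually report Ready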
