Creating a k8s cluster with kubeadm, using the Google repository

1. Configure the base environment and install docker

[root@localhost ~]# yum install -y docker
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
epel/x86_64/metalink | 13 kB 00:00:00
 * base: mirrors.xtom.com
 * epel: mirror.seas.harvard.edu
 * extras: mirrors.sonic.net
 * updates: mirrors.sonic.net
base | 3.6 kB 00:00:00
epel | 4.7 kB 00:00:00
extras | 3.4 kB 00:00:00
updates | 3.4 kB 00:00:00
epel/x86_64/primary_db FAILED ] 261 kB/s | 1.2 MB 00:00:45 ETA
http://reflector.westga.edu/repos/Fedora-EPEL/7/x86_64/repodata/2d8887f8e5e4cf6ea471191508205ef09e9fc593d7bd802c8d1d477907155a7c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.

    To address this issue please refer to the below wiki article

https://wiki.centos.org/yum-errors

If above article doesn't help to resolve this issue please use https://bugs.centos.org/.

(1/3): epel/x86_64/updateinfo | 998 kB 00:00:03
(2/3): updates/7/x86_64/primary_db | 5.0 MB 00:00:03
(3/3): epel/x86_64/primary_db | 6.7 MB 00:00:04
Resolving Dependencies
--> Running transaction check
---> Package docker.x86_64 2:1.13.1-96.gitb2f74b2.el7.centos will be installed
--> Processing Dependency: docker-common = 2:1.13.1-96.gitb2f74b2.el7.centos for package: 2:docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64
--> Processing Dependency: docker-client = 2:1.13.1-96.gitb2f74b2.el7.centos for package: 2:docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64
--> Running transaction check
---> Package docker-client.x86_64 2:1.13.1-96.gitb2f74b2.el7.centos will be installed
---> Package docker-common.x86_64 2:1.13.1-96.gitb2f74b2.el7.centos will be installed
--> Processing Dependency: skopeo-containers >= 1:0.1.26-2 for package: 2:docker-common-1.13.1-96.gitb2f74b2.el7.centos.x86_64
--> Processing Dependency: oci-umount >= 2:2.3.3-3 for package: 2:docker-common-1.13.1-96.gitb2f74b2.el7.centos.x86_64
--> Processing Dependency: oci-systemd-hook >= 1:0.1.4-9 for package: 2:docker-common-1.13.1-96.gitb2f74b2.el7.centos.x86_64
--> Processing Dependency: oci-register-machine >= 1:0-5.13 for package: 2:docker-common-1.13.1-96.gitb2f74b2.el7.centos.x86_64
--> Processing Dependency: container-storage-setup >= 0.9.0-1 for package: 2:docker-common-1.13.1-96.gitb2f74b2.el7.centos.x86_64
--> Processing Dependency: container-selinux >= 2:2.51-1 for package: 2:docker-common-1.13.1-96.gitb2f74b2.el7.centos.x86_64
--> Processing Dependency: atomic-registries for package: 2:docker-common-1.13.1-96.gitb2f74b2.el7.centos.x86_64
--> Running transaction check
---> Package atomic-registries.x86_64 1:1.22.1-26.gitb507039.el7.centos will be installed
--> Processing Dependency: python-yaml for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
--> Processing Dependency: python-pytoml for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
---> Package container-selinux.noarch 2:2.95-2.el7_6 will be installed
--> Processing Dependency: policycoreutils-python for package: 2:container-selinux-2.95-2.el7_6.noarch
---> Package container-storage-setup.noarch 0:0.11.0-2.git5eaf76c.el7 will be installed
---> Package containers-common.x86_64 1:0.1.35-2.git404c5bd.el7.centos will be installed
---> Package oci-register-machine.x86_64 1:0-6.git2b44233.el7 will be installed
---> Package oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6 will be installed
---> Package oci-umount.x86_64 2:2.3.4-2.git87f9237.el7 will be installed
--> Running transaction check
---> Package PyYAML.x86_64 0:3.10-11.el7 will be installed
--> Processing Dependency: libyaml-0.so.2()(64bit) for package: PyYAML-3.10-11.el7.x86_64
---> Package policycoreutils-python.x86_64 0:2.5-29.el7_6.1 will be installed
--> Processing Dependency: setools-libs >= 3.3.8-4 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
--> Processing Dependency: libsemanage-python >= 2.5-14 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
--> Processing Dependency: audit-libs-python >= 2.1.3-4 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
--> Processing Dependency: python-IPy for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
--> Processing Dependency: libqpol.so.1(VERS_1.4)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
--> Processing Dependency: libqpol.so.1(VERS_1.2)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
--> Processing Dependency: libapol.so.4(VERS_4.0)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
--> Processing Dependency: checkpolicy for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
--> Processing Dependency: libqpol.so.1()(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
--> Processing Dependency: libapol.so.4()(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
---> Package python-pytoml.noarch 0:0.1.14-1.git7dea353.el7 will be installed
--> Running transaction check
---> Package audit-libs-python.x86_64 0:2.8.4-4.el7 will be installed
---> Package checkpolicy.x86_64 0:2.5-8.el7 will be installed
---> Package libsemanage-python.x86_64 0:2.5-14.el7 will be installed
---> Package libyaml.x86_64 0:0.1.4-11.el7_0 will be installed
---> Package python-IPy.noarch 0:0.75-6.el7 will be installed
---> Package setools-libs.x86_64 0:3.3.8-4.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================================

Package Arch Version Repository Size

Installing:
docker x86_64 2:1.13.1-96.gitb2f74b2.el7.centos extras 18 M
Installing for dependencies:
PyYAML x86_64 3.10-11.el7 base 153 k
atomic-registries x86_64 1:1.22.1-26.gitb507039.el7.centos extras 35 k
audit-libs-python x86_64 2.8.4-4.el7 base 76 k
checkpolicy x86_64 2.5-8.el7 base 295 k
container-selinux noarch 2:2.95-2.el7_6 extras 39 k
container-storage-setup noarch 0.11.0-2.git5eaf76c.el7 extras 35 k
containers-common x86_64 1:0.1.35-2.git404c5bd.el7.centos extras 21 k
docker-client x86_64 2:1.13.1-96.gitb2f74b2.el7.centos extras 3.9 M
docker-common x86_64 2:1.13.1-96.gitb2f74b2.el7.centos extras 96 k
libsemanage-python x86_64 2.5-14.el7 base 113 k
libyaml x86_64 0.1.4-11.el7_0 base 55 k
oci-register-machine x86_64 1:0-6.git2b44233.el7 extras 1.1 M
oci-systemd-hook x86_64 1:0.1.18-3.git8787307.el7_6 extras 34 k
oci-umount x86_64 2:2.3.4-2.git87f9237.el7 extras 32 k
policycoreutils-python x86_64 2.5-29.el7_6.1 updates 456 k
python-IPy noarch 0.75-6.el7 base 32 k
python-pytoml noarch 0.1.14-1.git7dea353.el7 extras 18 k
setools-libs x86_64 3.3.8-4.el7 base 620 k

Transaction Summary

Install 1 Package (+18 Dependent packages)

Total download size: 25 M
Installed size: 87 M
Downloading packages:
(1/19): audit-libs-python-2.8.4-4.el7.x86_64.rpm | 76 kB 00:00:01
(2/19): atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64.rpm | 35 kB 00:00:01
(3/19): container-selinux-2.95-2.el7_6.noarch.rpm | 39 kB 00:00:01
(4/19): PyYAML-3.10-11.el7.x86_64.rpm | 153 kB 00:00:01
(5/19): containers-common-0.1.35-2.git404c5bd.el7.centos.x86_64.rpm | 21 kB 00:00:00
(6/19): docker-common-1.13.1-96.gitb2f74b2.el7.centos.x86_64.rpm | 96 kB 00:00:00
(7/19): libsemanage-python-2.5-14.el7.x86_64.rpm | 113 kB 00:00:01
(8/19): docker-client-1.13.1-96.gitb2f74b2.el7.centos.x86_64.rpm | 3.9 MB 00:00:02
(9/19): libyaml-0.1.4-11.el7_0.x86_64.rpm | 55 kB 00:00:00
(10/19): oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64.rpm | 34 kB 00:00:00
(11/19): oci-umount-2.3.4-2.git87f9237.el7.x86_64.rpm | 32 kB 00:00:00
(12/19): policycoreutils-python-2.5-29.el7_6.1.x86_64.rpm | 456 kB 00:00:00
(13/19): docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64.rpm | 18 MB 00:00:04
(14/19): python-IPy-0.75-6.el7.noarch.rpm | 32 kB 00:00:00
(15/19): python-pytoml-0.1.14-1.git7dea353.el7.noarch.rpm | 18 kB 00:00:00
(16/19): setools-libs-3.3.8-4.el7.x86_64.rpm | 620 kB 00:00:00
(17/19): oci-register-machine-0-6.git2b44233.el7.x86_64.rpm | 1.1 MB 00:00:03
container-storage-setup-0.11.0 FAILED
http://centos.sonn.com/7.6.1810/extras/x86_64/Packages/container-storage-setup-0.11.0-2.git5eaf76c.el7.noarch.rpm: [Errno 14] curl#6 - "Could not resolve host: centos.sonn.com; Unknown error"
Trying other mirror.
(18/19): container-storage-setup-0.11.0-2.git5eaf76c.el7.noarch.rpm | 35 kB 00:00:00

(19/19): checkpolicy-2.5-8.el7.x86_64.rpm | 295 kB 00:00:11

Total 2.1 MB/s | 25 MB 00:00:11
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64 1/19
Installing : setools-libs-3.3.8-4.el7.x86_64 2/19
Installing : 1:containers-common-0.1.35-2.git404c5bd.el7.centos.x86_64 3/19
Installing : checkpolicy-2.5-8.el7.x86_64 4/19
Installing : container-storage-setup-0.11.0-2.git5eaf76c.el7.noarch 5/19
Installing : python-IPy-0.75-6.el7.noarch 6/19
Installing : 1:oci-register-machine-0-6.git2b44233.el7.x86_64 7/19
Installing : libsemanage-python-2.5-14.el7.x86_64 8/19
Installing : libyaml-0.1.4-11.el7_0.x86_64 9/19
Installing : PyYAML-3.10-11.el7.x86_64 10/19
Installing : python-pytoml-0.1.14-1.git7dea353.el7.noarch 11/19
Installing : 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64 12/19
Installing : audit-libs-python-2.8.4-4.el7.x86_64 13/19
Installing : policycoreutils-python-2.5-29.el7_6.1.x86_64 14/19
Installing : 2:container-selinux-2.95-2.el7_6.noarch 15/19
setsebool: SELinux is disabled.
Installing : 2:oci-umount-2.3.4-2.git87f9237.el7.x86_64 16/19
Installing : 2:docker-common-1.13.1-96.gitb2f74b2.el7.centos.x86_64 17/19
Installing : 2:docker-client-1.13.1-96.gitb2f74b2.el7.centos.x86_64 18/19
Installing : 2:docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64 19/19
Verifying : 2:container-selinux-2.95-2.el7_6.noarch 1/19
Verifying : 2:oci-umount-2.3.4-2.git87f9237.el7.x86_64 2/19
Verifying : audit-libs-python-2.8.4-4.el7.x86_64 3/19
Verifying : python-pytoml-0.1.14-1.git7dea353.el7.noarch 4/19
Verifying : 2:docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64 5/19
Verifying : libyaml-0.1.4-11.el7_0.x86_64 6/19
Verifying : 2:docker-client-1.13.1-96.gitb2f74b2.el7.centos.x86_64 7/19
Verifying : libsemanage-python-2.5-14.el7.x86_64 8/19
Verifying : 1:oci-register-machine-0-6.git2b44233.el7.x86_64 9/19
Verifying : python-IPy-0.75-6.el7.noarch 10/19
Verifying : container-storage-setup-0.11.0-2.git5eaf76c.el7.noarch 11/19
Verifying : checkpolicy-2.5-8.el7.x86_64 12/19
Verifying : 2:docker-common-1.13.1-96.gitb2f74b2.el7.centos.x86_64 13/19
Verifying : 1:containers-common-0.1.35-2.git404c5bd.el7.centos.x86_64 14/19
Verifying : policycoreutils-python-2.5-29.el7_6.1.x86_64 15/19
Verifying : PyYAML-3.10-11.el7.x86_64 16/19
Verifying : 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64 17/19
Verifying : setools-libs-3.3.8-4.el7.x86_64 18/19
Verifying : 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64 19/19

Installed:
docker.x86_64 2:1.13.1-96.gitb2f74b2.el7.centos

Dependency Installed:
PyYAML.x86_64 0:3.10-11.el7 atomic-registries.x86_64 1:1.22.1-26.gitb507039.el7.centos
audit-libs-python.x86_64 0:2.8.4-4.el7 checkpolicy.x86_64 0:2.5-8.el7
container-selinux.noarch 2:2.95-2.el7_6 container-storage-setup.noarch 0:0.11.0-2.git5eaf76c.el7
containers-common.x86_64 1:0.1.35-2.git404c5bd.el7.centos docker-client.x86_64 2:1.13.1-96.gitb2f74b2.el7.centos
docker-common.x86_64 2:1.13.1-96.gitb2f74b2.el7.centos libsemanage-python.x86_64 0:2.5-14.el7
libyaml.x86_64 0:0.1.4-11.el7_0 oci-register-machine.x86_64 1:0-6.git2b44233.el7
oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6 oci-umount.x86_64 2:2.3.4-2.git87f9237.el7
policycoreutils-python.x86_64 0:2.5-29.el7_6.1 python-IPy.noarch 0:0.75-6.el7
python-pytoml.noarch 0:0.1.14-1.git7dea353.el7 setools-libs.x86_64 0:3.3.8-4.el7

Complete!

Turn off the swap partition and install bridge-utils

[root@localhost ~]# swapoff -a

[root@localhost ~]# yum install bridge-utils -y
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile

 * base: mirrors.xtom.com
 * epel: mirror.umd.edu
 * extras: mirror.sjc02.svwh.net
 * updates: mirrors.ocf.berkeley.edu
Package bridge-utils-1.5-9.el7.x86_64 already installed and latest version
Nothing to do
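Note that `swapoff -a` only disables swap until the next reboot; for the kubelet to stay happy after a reboot, the swap entry in fstab has to be commented out as well. A minimal sketch of that edit, run here against a stand-in copy with made-up device names (on a real node the target would be /etc/fstab itself):

```shell
# Persistently disable swap: comment out every fstab line whose entry is "swap".
# NOTE: operating on a temporary stand-in file with example devices;
# on a real node, point this at /etc/fstab.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# prepend '#' to any non-comment line containing a whitespace-delimited "swap"
sed -i 's/^\([^#].*[[:space:]]swap[[:space:]].*\)$/#\1/' "$fstab"
cat "$fstab"
```

Combined with `swapoff -a`, this keeps swap off both now and after reboots.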

Configure the kernel settings k8s needs

[root@localhost ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

[root@localhost ~]# sysctl --system

* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/60-libvirtd.conf ...
fs.aio-max-nr = 1048576
* Applying /usr/lib/sysctl.d/99-docker.conf ...
fs.may_detach_mounts = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
* Applying /etc/sysctl.conf ...

[root@localhost ~]# lsmod | grep br_netfilter

[root@localhost ~]# modprobe br_netfilter

[root@localhost ~]# lsmod | grep br_netfilter
br_netfilter 22256 0
bridge 151336 1 br_netfilter
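As a quick sanity check, the two keys written to k8s.conf can be read back and compared against the value kubeadm expects (1). A small sketch that parses the conf file itself rather than the live kernel, so it runs anywhere; the file path here is a stand-in for /etc/sysctl.d/k8s.conf:

```shell
# Extract a key's value from a sysctl-style "key = value" file.
sysctl_value() { awk -F'[= ]+' -v k="$1" '$1 == k {print $2}' "$2"; }

conf=$(mktemp)   # stand-in for /etc/sysctl.d/k8s.conf
printf '%s\n' \
  'net.bridge.bridge-nf-call-ip6tables = 1' \
  'net.bridge.bridge-nf-call-iptables = 1' > "$conf"

for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables; do
  [ "$(sysctl_value "$key" "$conf")" = "1" ] && echo "$key OK"
done
```

On a live node you would instead confirm with `sysctl net.bridge.bridge-nf-call-iptables` after `modprobe br_netfilter`.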

2. Install the kubeadm tooling; with no repo for it configured yet, this fails

[root@localhost ~]# yum install -y kubelet kubeadm kubectl
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile

 * base: mirrors.xtom.com
 * epel: mirror.sjc02.svwh.net
 * extras: mirror.sjc02.svwh.net
 * updates: mirrors.ocf.berkeley.edu
No package kubelet available.
No package kubeadm available.
No package kubectl available.
Error: Nothing to do

Add the Google repo, then install again

[root@localhost ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

[root@localhost ~]# yum repolist
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile

 * base: mirrors.xtom.com
 * epel: mirrors.rit.edu
 * extras: mirror.sjc02.svwh.net
 * updates: mirrors.sonic.net
kubernetes/signature | 454 B 00:00:00
Retrieving key from https://packages.cloud.google.com/yum/doc/yum-key.gpg
Importing GPG key 0xA7317B0F:
 Userid : "Google Cloud Packages Automatic Signing Key [email protected]"
 Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f
 From : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Is this ok [y/N]: y
Retrieving key from https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
kubernetes/signature | 1.4 kB 00:00:02 !!!
kubernetes/primary | 49 kB 00:00:01
kubernetes 351/351
repo id repo name status
base/7/x86_64 CentOS-7 - Base 10,019
epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 13,190
extras/7/x86_64 CentOS-7 - Extras 413
kubernetes Kubernetes 7+344
updates/7/x86_64 CentOS-7 - Updates 1,945
repolist: 25,574

Now install the kubeadm tooling

[root@localhost ~]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile

 * base: mirrors.xtom.com
 * epel: mirror.umd.edu
 * extras: mirror.sjc02.svwh.net
 * updates: mirrors.ocf.berkeley.edu
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.14.2-0 will be installed
--> Processing Dependency: kubernetes-cni >= 0.7.5 for package: kubeadm-1.14.2-0.x86_64
--> Processing Dependency: cri-tools >= 1.11.0 for package: kubeadm-1.14.2-0.x86_64
---> Package kubectl.x86_64 0:1.14.2-0 will be installed
---> Package kubelet.x86_64 0:1.14.2-0 will be installed
--> Processing Dependency: socat for package: kubelet-1.14.2-0.x86_64
--> Processing Dependency: conntrack for package: kubelet-1.14.2-0.x86_64
--> Running transaction check
---> Package conntrack-tools.x86_64 0:1.4.4-4.el7 will be installed
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
--> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
---> Package cri-tools.x86_64 0:1.12.0-0 will be installed
---> Package kubernetes-cni.x86_64 0:0.7.5-0 will be installed
---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed
--> Running transaction check
---> Package libnetfilter_cthelper.x86_64 0:1.0.0-9.el7 will be installed
---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7 will be installed
---> Package libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================================

Package Arch Version Repository Size

Installing:
kubeadm x86_64 1.14.2-0 kubernetes 8.7 M
kubectl x86_64 1.14.2-0 kubernetes 9.5 M
kubelet x86_64 1.14.2-0 kubernetes 23 M
Installing for dependencies:
conntrack-tools x86_64 1.4.4-4.el7 base 186 k
cri-tools x86_64 1.12.0-0 kubernetes 4.2 M
kubernetes-cni x86_64 0.7.5-0 kubernetes 10 M
libnetfilter_cthelper x86_64 1.0.0-9.el7 base 18 k
libnetfilter_cttimeout x86_64 1.0.0-6.el7 base 18 k
libnetfilter_queue x86_64 1.0.2-2.el7_2 base 23 k
socat x86_64 1.7.3.2-2.el7 base 290 k

Transaction Summary

Install 3 Packages (+7 Dependent packages)

Total download size: 56 M
Installed size: 256 M
Downloading packages:
(1/10): conntrack-tools-1.4.4-4.el7.x86_64.rpm | 186 kB 00:00:01
warning: /var/cache/yum/x86_64/7/kubernetes/packages/53edc739a0e51a4c17794de26b13ee5df939bd3161b37f503fe2af8980b41a89-cri-tools-1.12.0-0.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 3e1ba8d5: NOKEY
Public key for 53edc739a0e51a4c17794de26b13ee5df939bd3161b37f503fe2af8980b41a89-cri-tools-1.12.0-0.x86_64.rpm is not installed
(2/10): 53edc739a0e51a4c17794de26b13ee5df939bd3161b37f503fe2af8980b41a89-cri-tools-1.12.0-0.x86_64.rpm | 4.2 MB 00:00:04
(3/10): de639995840837d724cc5a4816733d5aef5a6bf384eaff22c786def53fb4e1d5-kubeadm-1.14.2-0.x86_64.rpm | 8.7 MB 00:00:05
(4/10): 7adc7890a14396a4ae88e7b8ed44c855c7d44dc3eefb98e4c729b99c2df6fa03-kubectl-1.14.2-0.x86_64.rpm | 9.5 MB 00:00:02
(5/10): libnetfilter_cttimeout-1.0.0-6.el7.x86_64.rpm | 18 kB 00:00:00
(6/10): libnetfilter_cthelper-1.0.0-9.el7.x86_64.rpm | 18 kB 00:00:01
(7/10): libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm | 23 kB 00:00:01
(8/10): 548a0dcd865c16a50980420ddfa5fbccb8b59621179798e6dc905c9bf8af3b34-kubernetes-cni-0.7.5-0.x86_64.rpm | 10 MB 00:00:02
(9/10): socat-1.7.3.2-2.el7.x86_64.rpm | 290 kB 00:00:01

(10/10): 1a181064b472261b78b534b5a233a4c73d505673c02acbe01d95db819940006e-kubelet-1.14.2-0.x86_64.rpm | 23 MB 00:00:03

Total 5.8 MB/s | 56 MB 00:00:09
Retrieving key from https://packages.cloud.google.com/yum/doc/yum-key.gpg
Importing GPG key 0xA7317B0F:
Userid : "Google Cloud Packages Automatic Signing Key [email protected]"
Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f
From : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Retrieving key from https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Importing GPG key 0x3E1BA8D5:
Userid : "Google Cloud Packages RPM Signing Key [email protected]"
Fingerprint: 3749 e1ba 95a8 6ce0 5454 6ed2 f09c 394c 3e1b a8d5
From : https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : socat-1.7.3.2-2.el7.x86_64 1/10
Installing : libnetfilter_cthelper-1.0.0-9.el7.x86_64 2/10
Installing : libnetfilter_queue-1.0.2-2.el7_2.x86_64 3/10
Installing : kubectl-1.14.2-0.x86_64 4/10
Installing : libnetfilter_cttimeout-1.0.0-6.el7.x86_64 5/10
Installing : conntrack-tools-1.4.4-4.el7.x86_64 6/10
Installing : kubernetes-cni-0.7.5-0.x86_64 7/10
Installing : kubelet-1.14.2-0.x86_64 8/10
Installing : cri-tools-1.12.0-0.x86_64 9/10
Installing : kubeadm-1.14.2-0.x86_64 10/10
Verifying : cri-tools-1.12.0-0.x86_64 1/10
Verifying : libnetfilter_cttimeout-1.0.0-6.el7.x86_64 2/10
Verifying : kubectl-1.14.2-0.x86_64 3/10
Verifying : libnetfilter_queue-1.0.2-2.el7_2.x86_64 4/10
Verifying : kubeadm-1.14.2-0.x86_64 5/10
Verifying : libnetfilter_cthelper-1.0.0-9.el7.x86_64 6/10
Verifying : kubelet-1.14.2-0.x86_64 7/10
Verifying : kubernetes-cni-0.7.5-0.x86_64 8/10
Verifying : socat-1.7.3.2-2.el7.x86_64 9/10
Verifying : conntrack-tools-1.4.4-4.el7.x86_64 10/10

Installed:
kubeadm.x86_64 0:1.14.2-0 kubectl.x86_64 0:1.14.2-0 kubelet.x86_64 0:1.14.2-0

Dependency Installed:
conntrack-tools.x86_64 0:1.4.4-4.el7 cri-tools.x86_64 0:1.12.0-0 kubernetes-cni.x86_64 0:0.7.5-0
libnetfilter_cthelper.x86_64 0:1.0.0-9.el7 libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7 libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
socat.x86_64 0:1.7.3.2-2.el7

Complete!

3. Initialize the master; as you can see, it fails because I forgot to start docker

[root@localhost ~]# kubeadm init --apiserver-advertise-address 192.168.0.205 --pod-network-cidr=10.244.0.0/16

[init] Using Kubernetes version: v1.14.2
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'

[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 3.10.0-957.12.1.el7.x86_64
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled (as module)
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module)
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module)
CONFIG_OVERLAY_FS: enabled (as module)
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled (as module)
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
, error: exit status 1
[ERROR Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
[ERROR IsDockerSystemdCheck]: cannot execute 'docker info': exit status 1
[ERROR SystemVerification]: failed to get docker info: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...

Start docker, then try again

[root@localhost ~]# systemctl restart docker

[root@localhost ~]# kubeadm init --apiserver-advertise-address 192.168.0.205 --pod-network-cidr=10.244.0.0/16

[init] Using Kubernetes version: v1.14.2
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.0.205 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.0.205 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.205]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.005607 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: edidaa.umann314693vc46u
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.205:6443 --token edidaa.umann314693vc46u \
    --discovery-token-ca-cert-hash sha256:d3cec87bf46c35cdd379e8a23e55716a7b9f5520207519b2f47db6ff638ebf01
The installation succeeded.
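The join token above is short-lived and the output is easy to lose. On the master, `kubeadm token create --print-join-command` regenerates a full join command, and the `--discovery-token-ca-cert-hash` value can also be recomputed from the cluster CA certificate with openssl. A sketch of that computation (the helper name is mine, and it is exercised here against a throwaway self-signed cert, since /etc/kubernetes/pki/ca.crt only exists on a real master):

```shell
# Recompute the sha256 hash of a CA certificate's public key, i.e. the value
# kubeadm prints after "--discovery-token-ca-cert-hash sha256:".
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | awk '{print $NF}'       # keep only the hex digest
}
# On a real control-plane node you would run:
#   ca_cert_hash /etc/kubernetes/pki/ca.crt
```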

4. Configure kubectl
As the init output suggests, kubectl should be run as a regular user, so add a user fencatn with the password 123456

[root@localhost ~]# useradd fencatn

[root@localhost ~]# echo '123456' | passwd --stdin fencatn
Changing password for user fencatn.
passwd: all authentication tokens updated successfully.
Switch to the fencatn user and continue

[root@localhost ~]# su - fencatn

[fencatn@localhost ~]$ echo $HOME
/home/fencatn

[fencatn@localhost ~]$ mkdir -p $HOME/.kube
I forgot to give fencatn sudo rights, so I had to go back and set that up first

[fencatn@localhost ~]$ exit
logout
You have new mail in /var/spool/mail/root

[root@localhost ~]# vim /etc/sudoers

[root@localhost ~]# su - fencatn
Last login: Fri May 17 09:23:09 EDT 2019 on pts/0

[fencatn@localhost ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[sudo] password for fencatn:

[fencatn@localhost ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

[fencatn@localhost ~]$ ll /home/fencatn/.kube/config
-rw------- 1 fencatn fencatn 5453 May 17 09:26 /home/fencatn/.kube/config

To make kubectl more convenient, enable its command completion (this just emits kubectl's bash completion script and sources it)

[fencatn@localhost ~]$ tail -1 .bashrc
source <(kubectl completion bash)
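Appending that `source` line by hand each time risks piling up duplicates in `.bashrc`. A small idempotent sketch (run here against a stand-in file; on the real account the target would be ~/.bashrc, and the helper name is mine):

```shell
# Add the kubectl completion line to a bashrc only if it is not already present.
rc=$(mktemp)   # stand-in for ~/.bashrc
line='source <(kubectl completion bash)'
add_completion() { grep -qxF "$line" "$rc" || printf '%s\n' "$line" >> "$rc"; }
add_completion
add_completion   # second call is a no-op, so the line appears exactly once
```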

5. Before going any further, set the hostnames; admittedly this should have been done at the start, sorry.
My setup has 4 nodes: k8s-master, k8s-node1, k8s-node2 and k8s-node3
192.168.0.205 k8s-master
192.168.0.206 k8s-node1
192.168.0.207 k8s-node2
192.168.0.208 k8s-node3

[root@localhost ~]# hostnamectl set-hostname k8s-mster

[root@localhost ~]# vim /etc/hosts

[root@localhost ~]# exit
logout
[C:~]$ ssh [email protected]

Connecting to 192.168.0.205:22…
Connection established.
To escape to local shell, press ‘Ctrl+Alt+]’.

Last login: Fri May 17 08:45:00 2019 from 192.168.0.15

[root@k8s-mster ~]# ping k8s-node3
PING k8s-node3 (192.168.0.208) 56(84) bytes of data.
64 bytes from k8s-node3 (192.168.0.208): icmp_seq=1 ttl=64 time=1.96 ms
^C
--- k8s-node3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.960/1.960/1.960/0.000 ms

Change the other 3 nodes the same way; I won't paste it all again
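The same /etc/hosts block has to land on all four machines. A small sketch that writes the mappings and checks one of them resolves as expected (a stand-in file is used so it doesn't touch the real /etc/hosts, and the `lookup` helper is just an illustration of what `getent hosts` would do):

```shell
# Append the cluster name/IP mappings to a hosts file and look one up.
hosts=$(mktemp)   # stand-in for /etc/hosts
cat >> "$hosts" <<'EOF'
192.168.0.205 k8s-master
192.168.0.206 k8s-node1
192.168.0.207 k8s-node2
192.168.0.208 k8s-node3
EOF
# resolve a hostname from the file, like a minimal getent
lookup() { awk -v h="$1" '$2 == h {print $1}' "$hosts"; }
lookup k8s-node3
```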

6. Install the pod network; here we use flannel

[fencatn@k8s-mster ~]$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

Some of you probably can't reach that manifest, so here is a copy of https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml for reference:


apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
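The "Network" value in net-conf.json must match the --pod-network-cidr that was passed to kubeadm init (10.244.0.0/16 in this walkthrough), and flannel hands each node one /24 out of that range (matching the controller-manager's --node-cidr-mask-size=24 shown later). A quick local sanity check of that relationship:

```python
import ipaddress
import json

# net-conf.json exactly as embedded in the ConfigMap above.
net_conf = json.loads('{"Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}')

pod_cidr = ipaddress.ip_network(net_conf["Network"])
# kubeadm init must have been run with the same --pod-network-cidr value:
assert str(pod_cidr) == "10.244.0.0/16"

# With a /24 mask per node, each node gets one subnet out of this range,
# i.e. room for 256 node subnets of 254 usable pod IPs each.
node_subnets = list(pod_cidr.subnets(new_prefix=24))
print(len(node_subnets), node_subnets[0], node_subnets[1])
# 256 10.244.0.0/24 10.244.1.0/24
```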

---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-amd64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-amd64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
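The four DaemonSets that follow are identical to this one except for the arch node selector, the image tag, and the object name; the repetition follows a simple pattern, sketched here (a throwaway helper, not part of the manifest):

```python
# One DaemonSet per supported architecture; only the selector value,
# image tag suffix, and object name change between them.
ARCHES = ["amd64", "arm64", "arm", "ppc64le", "s390x"]

def flannel_daemonsets(version="v0.11.0"):
    """Map each arch to the fields that vary across the five DaemonSets."""
    return {
        arch: {
            "name": f"kube-flannel-ds-{arch}",
            "image": f"quay.io/coreos/flannel:{version}-{arch}",
            "nodeSelector": {"beta.kubernetes.io/arch": arch},
        }
        for arch in ARCHES
    }

print(flannel_daemonsets()["amd64"]["image"])
# quay.io/coreos/flannel:v0.11.0-amd64
```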

---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm64
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-arm64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-arm64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-arm
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-arm
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: ppc64le
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-ppc64le
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-ppc64le
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: s390x
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-s390x
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-s390x
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

7、Add the worker nodes
Using one node as an example; the others are done the same way. The join command was actually printed at the end of the kubeadm init output on the master, so you can just copy and paste it. (If you've lost that output, running kubeadm token create --print-join-command on the master prints a fresh one.)

[root@k8s-node1 ~]# kubeadm join 192.168.0.205:6443 --token edidaa.umann314693vc46u \
    --discovery-token-ca-cert-hash sha256:d3cec87bf46c35cdd379e8a23e55716a7b9f5520207519b2f47db6ff638ebf01
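That sha256:... value is not magic: it is the SHA-256 digest of the DER-encoded public key of the cluster CA certificate, which on the master lives at /etc/kubernetes/pki/ca.crt. A sketch of the derivation, using a throwaway CA so it runs anywhere (on the real master you would feed in ca.crt instead):

```shell
#!/bin/sh
# Generate a throwaway CA certificate purely for illustration.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
    -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null
# The discovery hash is sha256 over the DER-encoded SubjectPublicKeyInfo.
HASH=$(openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```

Running this against the real ca.crt lets a node operator verify the hash out-of-band before joining.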

[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node1 ~]#

8、Verify the installation

[fencatn@k8s-mster ~]$ kubectl get nodes
NAME                    STATUS     ROLES    AGE   VERSION
k8s-node1               NotReady   <none>   75s   v1.14.2
k8s-node2               NotReady   <none>   46s   v1.14.2
k8s-node3               NotReady   <none>   13s   v1.14.2
localhost.localdomain   Ready      master   75m   v1.14.2
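The workers show NotReady for a short while until flannel has started on them. When checking many nodes it can be handy to filter this output; a throwaway helper (the sample text below is hard-coded from the output above — in practice you would feed it live kubectl get nodes output):

```python
# Hypothetical helper: list nodes whose STATUS column is not "Ready".
SAMPLE = """\
NAME                    STATUS     ROLES    AGE   VERSION
k8s-node1               NotReady   <none>   75s   v1.14.2
k8s-node2               NotReady   <none>   46s   v1.14.2
k8s-node3               NotReady   <none>   13s   v1.14.2
localhost.localdomain   Ready      master   75m   v1.14.2
"""

def not_ready(kubectl_output: str) -> list:
    """Return names of nodes whose second column is not 'Ready'."""
    rows = kubectl_output.strip().splitlines()[1:]  # skip the header row
    return [cols[0] for cols in (r.split() for r in rows) if cols[1] != "Ready"]

print(not_ready(SAMPLE))
# ['k8s-node1', 'k8s-node2', 'k8s-node3']
```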

[fencatn@k8s-mster ~]$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-997w2                         1/1     Running   0          76m
kube-system   coredns-fb8b8dccf-qdqx9                         1/1     Running   0          76m
kube-system   etcd-localhost.localdomain                      1/1     Running   0          76m
kube-system   kube-apiserver-localhost.localdomain            1/1     Running   0          75m
kube-system   kube-controller-manager-localhost.localdomain   1/1     Running   0          75m
kube-system   kube-flannel-ds-amd64-4gp9x                     1/1     Running   2          3m9s
kube-system   kube-flannel-ds-amd64-spnht                     1/1     Running   0          23m
kube-system   kube-flannel-ds-amd64-vgssj                     1/1     Running   0          2m40s
kube-system   kube-flannel-ds-amd64-zx72c                     1/1     Running   3          2m7s
kube-system   kube-proxy-jgqsg                                1/1     Running   0          3m9s
kube-system   kube-proxy-nfv49                                1/1     Running   0          76m
kube-system   kube-proxy-nvwdx                                1/1     Running   0          2m40s
kube-system   kube-proxy-ptqc5                                1/1     Running   0          2m7s
kube-system   kube-scheduler-localhost.localdomain            1/1     Running   0          76m

[fencatn@k8s-mster ~]$

Pending, ContainerCreating, or ImagePullBackOff all mean the pod is not ready yet; Running is the normal state. If a pod has problems, run kubectl describe pod on it to see the details, and note that you must use -n to specify the namespace.

[fencatn@k8s-mster ~]$ kubectl describe pod kube-controller-manager-localhost.localdomain -n kube-system
Name: kube-controller-manager-localhost.localdomain
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: localhost.localdomain/192.168.0.205
Start Time: Fri, 17 May 2019 09:09:40 -0400
Labels: component=kube-controller-manager
tier=control-plane
Annotations: kubernetes.io/config.hash: 59eb3b921f0bf7278134e458380d3d58
kubernetes.io/config.mirror: 59eb3b921f0bf7278134e458380d3d58
kubernetes.io/config.seen: 2019-05-17T09:09:40.01042606-04:00
kubernetes.io/config.source: file
Status: Running
IP: 192.168.0.205
Containers:
kube-controller-manager:
Container ID: docker://a1ac4f5833a9760674213f47edc01a0836606b45406fac415b5b325493a6ed18
Image: k8s.gcr.io/kube-controller-manager:v1.14.2
Image ID: docker-pullable://k8s.gcr.io/kube-controller-manager@sha256:51a382e90acd9d11d5571850312ad4b268db8b28b2868516dfda19a6933a095c
Port:
Host Port:
Command:
      kube-controller-manager
      --allocate-node-cidrs=true
      --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      --bind-address=127.0.0.1
      --client-ca-file=/etc/kubernetes/pki/ca.crt
      --cluster-cidr=10.244.0.0/16
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
      --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
      --controllers=*,bootstrapsigner,tokencleaner
      --kubeconfig=/etc/kubernetes/controller-manager.conf
      --leader-elect=true
      --node-cidr-mask-size=24
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
      --root-ca-file=/etc/kubernetes/pki/ca.crt
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key
      --use-service-account-credentials=true
State: Running
Started: Fri, 17 May 2019 09:09:41 -0400
Ready: True
Restart Count: 0
Requests:
cpu: 200m
Liveness: http-get http://127.0.0.1:10252/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
Environment:
Mounts:
/etc/kubernetes/controller-manager.conf from kubeconfig (ro)
/etc/kubernetes/pki from k8s-certs (ro)
/etc/pki from etc-pki (ro)
/etc/ssl/certs from ca-certs (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
ca-certs:
Type: HostPath (bare host directory volume)
Path: /etc/ssl/certs
HostPathType: DirectoryOrCreate
etc-pki:
Type: HostPath (bare host directory volume)
Path: /etc/pki
HostPathType: DirectoryOrCreate
k8s-certs:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki
HostPathType: DirectoryOrCreate
kubeconfig:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/controller-manager.conf
HostPathType: FileOrCreate
QoS Class: Burstable
Node-Selectors:
Tolerations: :NoExecute
Events:

At this point the deployment is complete; detailed usage is covered next.
