OpenStack instance resize error: Openstack resize instance no valid host was found

When resizing an instance, the request failed immediately with HTTP 400: "No valid host was found. No valid host found for resize (HTTP 400)".

A quick search turned up a blogger with the answer: by default the nova configuration does not allow an instance to be scheduled back onto the same node during a resize; once that is allowed, the resize works. In other words, set

allow_resize_to_same_host to true in /etc/nova/nova.conf 
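For reference, this option lives in the [DEFAULT] section of nova.conf, so the change looks like the snippet below (a sketch; your file layout may differ per deployment). After changing it, restart nova-api and nova-compute as described in the quoted post.

[DEFAULT]
# allow the scheduler to resize an instance onto the host it is already running on
allow_resize_to_same_host = true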

Here is the link to his post:

Openstack resize instance no valid host was found

Today I had to resize an instance in my OpenStack and noticed that I couldn't do that because of an error: "No valid host was found. No valid host found for resize (HTTP 400)". Soon enough I understood OpenStack was trying to resize the instance by using another host, however my setup is an all-in-one node.

The solution is rather simple: set the property allow_resize_to_same_host to true in /etc/nova/nova.conf. After this, restart nova-compute and nova-api by doing

# systemctl restart openstack-nova-compute
# systemctl restart openstack-nova-api
Also be careful if your instance keeps its root disk inside Cinder (boot from image and create a new volume): I still haven't found a way to resize such an instance using Nova. Inexplicably the request times out and the instance enters the VM_ERROR state. The workaround I used is:

Delete the instance while keeping the volume that contains the root.
Create a new instance booting from the root volume of the precedent.
Re-assign floating IP.
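A rough sketch of that workaround with the openstack CLI (the server, volume, flavor, network and IP names below are placeholders, and it assumes the root volume is not set to delete on termination):

# delete the broken instance; its root volume in Cinder survives
openstack server delete old-instance
# boot a replacement from the surviving root volume with the desired flavor
openstack server create --flavor m1.large --volume old-root-volume --network private new-instance
# move the floating IP over to the new instance
openstack server add floating ip new-instance 203.0.113.10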
EDIT: After a few tweaks and a reboot I can now resize instances with root disk inside Cinder. It was probably due to a malfunction in the cinder-scheduler component.


kubeadm deployment failure caused by a dirty environment, and how to fix it

The fix first; it comes from:

https://github.com/kubernetes/kubeadm/issues/1092

From a comment posted on Sep 12, 2018:

$ kubeadm reset
$ ifconfig cni0 down && ip link delete cni0
$ ifconfig flannel.1 down && ip link delete flannel.1
$ rm -rf /var/lib/cni/

good luck!

A normal deployment attempt produced a pile of errors:

[root@server3 yum.repos.d]# kubeadm init --apiserver-advertise-address 176.204.66.113 --pod-network-cidr=10.244.0.0/16
I0521 16:38:11.115087 12754 version.go:236] remote version is much newer: v1.14.2; falling back to: stable-1.12
[init] using Kubernetes version: v1.12.8
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using ‘kubeadm config images pull’
[kubelet] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [server3 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [server3 localhost] and IPs [176.204.66.103 127.0.0.1 ::1]
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [server3 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 176.204.66.103]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in “/etc/kubernetes/pki”
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: “/etc/kubernetes/admin.conf”
[kubeconfig] Wrote KubeConfig file to disk: “/etc/kubernetes/kubelet.conf”
[kubeconfig] Wrote KubeConfig file to disk: “/etc/kubernetes/controller-manager.conf”
[kubeconfig] Wrote KubeConfig file to disk: “/etc/kubernetes/scheduler.conf”
[controlplane] wrote Static Pod manifest for component kube-apiserver to “/etc/kubernetes/manifests/kube-apiserver.yaml”
[controlplane] wrote Static Pod manifest for component kube-controller-manager to “/etc/kubernetes/manifests/kube-controller-manager.yaml”
[controlplane] wrote Static Pod manifest for component kube-scheduler to “/etc/kubernetes/manifests/kube-scheduler.yaml”
[etcd] Wrote Static Pod manifest for a local etcd instance to “/etc/kubernetes/manifests/etcd.yaml”
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory “/etc/kubernetes/manifests”
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 25.502697 seconds
[uploadconfig] storing the configuration used in ConfigMap “kubeadm-config” in the “kube-system” Namespace
[kubelet] Creating a ConfigMap “kubelet-config-1.12” in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node server3 as master by adding the label “node-role.kubernetes.io/master=””
[markmaster] Marking the node server3 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
error marking master: timed out waiting for the condition
[root@server3 yum.repos.d]# kubeadm init --apiserver-advertise-address 176.204.66.113 --pod-network-cidr=10.244.0.0/16
I0521 16:41:09.835342 14174 version.go:236] remote version is much newer: v1.14.2; falling back to: stable-1.12
[init] using Kubernetes version: v1.12.8
[preflight] running pre-flight checks
[preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Port-10251]: Port 10251 is in use
[ERROR Port-10252]: Port 10252 is in use
[ERROR FileAvailable–etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable–etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable–etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable–etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR DirAvailable–var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Looking up these errors got me nowhere; I was clearly searching in the wrong direction.

[root@server3 yum.repos.d]# journalctl -xe
May 21 16:41:39 server3 kubelet[12986]: E0521 16:41:39.447307 12986 kubelet.go:2244] node “server3” not found
May 21 16:41:39 server3 kubelet[12986]: E0521 16:41:39.547518 12986 kubelet.go:2244] node “server3” not found
May 21 16:41:39 server3 kubelet[12986]: I0521 16:41:39.604606 12986 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
May 21 16:41:39 server3 kubelet[12986]: I0521 16:41:39.608043 12986 kubelet_node_status.go:72] Attempting to register node server3
May 21 16:41:39 server3 dockerd-current[30957]: E0521 08:41:39.608959 1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority, x509: certi
May 21 16:41:39 server3 kubelet[12986]: E0521 16:41:39.609473 12986 kubelet_node_status.go:94] Unable to register node “server3” with API server: Unauthorized
May 21 16:41:39 server3 kubelet[12986]: E0521 16:41:39.647690 12986 kubelet.go:2244] node “server3” not found
May 21 16:41:39 server3 kubelet[12986]: E0521 16:41:39.747912 12986 kubelet.go:2244] node “server3” not found
May 21 16:41:39 server3 dockerd-current[30957]: E0521 08:41:39.803911 1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority, x509: certi
May 21 16:41:39 server3 kubelet[12986]: E0521 16:41:39.804379 12986 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
May 21 16:41:39 server3 kubelet[12986]: E0521 16:41:39.848169 12986 kubelet.go:2244] node “server3” not found
May 21 16:41:39 server3 kubelet[12986]: E0521 16:41:39.948407 12986 kubelet.go:2244] node “server3” not found
May 21 16:41:40 server3 dockerd-current[30957]: E0521 08:41:40.003894 1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority, x509: certi
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.004335 12986 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Unauthorized
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.048673 12986 kubelet.go:2244] node “server3” not found
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.148920 12986 kubelet.go:2244] node “server3” not found
May 21 16:41:40 server3 dockerd-current[30957]: E0521 08:41:40.203930 1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority, x509: certi
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.204444 12986 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Unauthorized
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.249130 12986 kubelet.go:2244] node “server3” not found
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.349343 12986 kubelet.go:2244] node “server3” not found
May 21 16:41:40 server3 dockerd-current[30957]: E0521 08:41:40.403983 1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority, x509: certi
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.404516 12986 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Unauthorized
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.449487 12986 kubelet.go:2244] node “server3” not found
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.549709 12986 kubelet.go:2244] node “server3” not found
May 21 16:41:40 server3 dockerd-current[30957]: E0521 08:41:40.603871 1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority, x509: certi
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.604379 12986 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Unauthorized
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.649934 12986 kubelet.go:2244] node “server3” not found
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.750133 12986 kubelet.go:2244] node “server3” not found
May 21 16:41:40 server3 dockerd-current[30957]: E0521 08:41:40.805437 1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority, x509: certi
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.805909 12986 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.850402 12986 kubelet.go:2244] node “server3” not found
May 21 16:41:40 server3 kubelet[12986]: E0521 16:41:40.950626 12986 kubelet.go:2244] node “server3” not found
May 21 16:41:41 server3 dockerd-current[30957]: E0521 08:41:41.005300 1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority, x509: certi
May 21 16:41:41 server3 kubelet[12986]: E0521 16:41:41.005753 12986 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Unauthorized
May 21 16:41:41 server3 kubelet[12986]: E0521 16:41:41.050825 12986 kubelet.go:2244] node “server3” not found
May 21 16:41:41 server3 polkitd[5794]: Registered Authentication Agent for unix-process:14438:1872848 (system bus name :1.280 [/usr/bin/pkttyagent –notify-fd 5 –fallback], object path /org/freedesktop/PolicyKi
May 21 16:41:41 server3 systemd[1]: Stopping kubelet: The Kubernetes Node Agent…
— Subject: Unit kubelet.service has begun shutting down
— Defined-By: systemd
— Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

— Unit kubelet.service has begun shutting down.
May 21 16:41:41 server3 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
— Subject: Unit kubelet.service has finished shutting down
— Defined-By: systemd
— Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

— Unit kubelet.service has finished shutting down.
May 21 16:41:41 server3 polkitd[5794]: Unregistered Authentication Agent for unix-process:14438:1872848 (system bus name :1.280, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (

The material online was just as confusing. A Google search finally turned up the solution, and it is actually simple: kubeadm can clean up the environment by itself. Just run:

[root@server3 yum.repos.d]# kubeadm reset
[reset] WARNING: changes made to this host by ‘kubeadm init’ or ‘kubeadm join’ will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] unmounting mounted directories in “/var/lib/kubelet”
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[root@server3 yum.repos.d]# rm -rf /var/lib/cni

[root@server3 yum.repos.d]# ifconfig flannel.1 down && ip link delete flannel.1
[root@server3 yum.repos.d]#
[root@server3 yum.repos.d]#
[root@server3 yum.repos.d]# ifconfig cni0 down && ip link delete cni0
cni0: ERROR while getting interface flags: No such device

Run the init again, and this time it works:

[root@server3 yum.repos.d]# kubeadm init --apiserver-advertise-address 176.204.66.113 --pod-network-cidr=10.244.0.0/16
I0521 16:46:56.439596 15220 version.go:236] remote version is much newer: v1.14.2; falling back to: stable-1.12
[init] using Kubernetes version: v1.12.8
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using ‘kubeadm config images pull’
[kubelet] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [server3 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [server3 localhost] and IPs [176.204.66.103 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [server3 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 176.204.66.103]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in “/etc/kubernetes/pki”
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: “/etc/kubernetes/admin.conf”
[kubeconfig] Wrote KubeConfig file to disk: “/etc/kubernetes/kubelet.conf”
[kubeconfig] Wrote KubeConfig file to disk: “/etc/kubernetes/controller-manager.conf”
[kubeconfig] Wrote KubeConfig file to disk: “/etc/kubernetes/scheduler.conf”
[controlplane] wrote Static Pod manifest for component kube-apiserver to “/etc/kubernetes/manifests/kube-apiserver.yaml”
[controlplane] wrote Static Pod manifest for component kube-controller-manager to “/etc/kubernetes/manifests/kube-controller-manager.yaml”
[controlplane] wrote Static Pod manifest for component kube-scheduler to “/etc/kubernetes/manifests/kube-scheduler.yaml”
[etcd] Wrote Static Pod manifest for a local etcd instance to “/etc/kubernetes/manifests/etcd.yaml”
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory “/etc/kubernetes/manifests”
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 25.503495 seconds
[uploadconfig] storing the configuration used in ConfigMap “kubeadm-config” in the “kube-system” Namespace
[kubelet] Creating a ConfigMap “kubelet-config-1.12” in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node server3 as master by adding the label “node-role.kubernetes.io/master=””
[markmaster] Marking the node server3 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information “/var/run/dockershim.sock” to the Node API object “server3” as an annotation
[bootstraptoken] using token: au85dy.56wx3mc5mqyxsam9
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the “cluster-info” ConfigMap in the “kube-public” namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 176.204.66.103:6443 --token au85dy.56wx3mc5mqyxsam9 --discovery-token-ca-cert-hash sha256:43362c8c646283747d22a2b053cd3eff4f2753f0dc494f8aab75435b405abc20

[root@server3 yum.repos.d]#


Creating a k8s cluster with kubeadm, using the Google package repository

1. Set up the base environment and install Docker

[root@localhost ~]# yum install -y docker
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
epel/x86_64/metalink | 13 kB 00:00:00
 * base: mirrors.xtom.com
 * epel: mirror.seas.harvard.edu
 * extras: mirrors.sonic.net
 * updates: mirrors.sonic.net
base | 3.6 kB 00:00:00
epel | 4.7 kB 00:00:00
extras | 3.4 kB 00:00:00
updates | 3.4 kB 00:00:00
epel/x86_64/primary_db FAILED ] 261 kB/s | 1.2 MB 00:00:45 ETA
http://reflector.westga.edu/repos/Fedora-EPEL/7/x86_64/repodata/2d8887f8e5e4cf6ea471191508205ef09e9fc593d7bd802c8d1d477907155a7c-primary.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
To address this issue please refer to the below wiki article

https://wiki.centos.org/yum-errors

If above article doesn't help to resolve this issue please use https://bugs.centos.org/.

(1/3): epel/x86_64/updateinfo | 998 kB 00:00:03
(2/3): updates/7/x86_64/primary_db | 5.0 MB 00:00:03
(3/3): epel/x86_64/primary_db | 6.7 MB 00:00:04
Resolving Dependencies
–> Running transaction check
—> Package docker.x86_64 2:1.13.1-96.gitb2f74b2.el7.centos will be installed
–> Processing Dependency: docker-common = 2:1.13.1-96.gitb2f74b2.el7.centos for package: 2:docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64
–> Processing Dependency: docker-client = 2:1.13.1-96.gitb2f74b2.el7.centos for package: 2:docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64
–> Running transaction check
—> Package docker-client.x86_64 2:1.13.1-96.gitb2f74b2.el7.centos will be installed
—> Package docker-common.x86_64 2:1.13.1-96.gitb2f74b2.el7.centos will be installed
–> Processing Dependency: skopeo-containers >= 1:0.1.26-2 for package: 2:docker-common-1.13.1-96.gitb2f74b2.el7.centos.x86_64
–> Processing Dependency: oci-umount >= 2:2.3.3-3 for package: 2:docker-common-1.13.1-96.gitb2f74b2.el7.centos.x86_64
–> Processing Dependency: oci-systemd-hook >= 1:0.1.4-9 for package: 2:docker-common-1.13.1-96.gitb2f74b2.el7.centos.x86_64
–> Processing Dependency: oci-register-machine >= 1:0-5.13 for package: 2:docker-common-1.13.1-96.gitb2f74b2.el7.centos.x86_64
–> Processing Dependency: container-storage-setup >= 0.9.0-1 for package: 2:docker-common-1.13.1-96.gitb2f74b2.el7.centos.x86_64
–> Processing Dependency: container-selinux >= 2:2.51-1 for package: 2:docker-common-1.13.1-96.gitb2f74b2.el7.centos.x86_64
–> Processing Dependency: atomic-registries for package: 2:docker-common-1.13.1-96.gitb2f74b2.el7.centos.x86_64
–> Running transaction check
—> Package atomic-registries.x86_64 1:1.22.1-26.gitb507039.el7.centos will be installed
–> Processing Dependency: python-yaml for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
–> Processing Dependency: python-pytoml for package: 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64
—> Package container-selinux.noarch 2:2.95-2.el7_6 will be installed
–> Processing Dependency: policycoreutils-python for package: 2:container-selinux-2.95-2.el7_6.noarch
—> Package container-storage-setup.noarch 0:0.11.0-2.git5eaf76c.el7 will be installed
—> Package containers-common.x86_64 1:0.1.35-2.git404c5bd.el7.centos will be installed
—> Package oci-register-machine.x86_64 1:0-6.git2b44233.el7 will be installed
—> Package oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6 will be installed
—> Package oci-umount.x86_64 2:2.3.4-2.git87f9237.el7 will be installed
–> Running transaction check
—> Package PyYAML.x86_64 0:3.10-11.el7 will be installed
–> Processing Dependency: libyaml-0.so.2()(64bit) for package: PyYAML-3.10-11.el7.x86_64
—> Package policycoreutils-python.x86_64 0:2.5-29.el7_6.1 will be installed
–> Processing Dependency: setools-libs >= 3.3.8-4 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
–> Processing Dependency: libsemanage-python >= 2.5-14 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
–> Processing Dependency: audit-libs-python >= 2.1.3-4 for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
–> Processing Dependency: python-IPy for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
–> Processing Dependency: libqpol.so.1(VERS_1.4)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
–> Processing Dependency: libqpol.so.1(VERS_1.2)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
–> Processing Dependency: libapol.so.4(VERS_4.0)(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
–> Processing Dependency: checkpolicy for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
–> Processing Dependency: libqpol.so.1()(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
–> Processing Dependency: libapol.so.4()(64bit) for package: policycoreutils-python-2.5-29.el7_6.1.x86_64
—> Package python-pytoml.noarch 0:0.1.14-1.git7dea353.el7 will be installed
–> Running transaction check
—> Package audit-libs-python.x86_64 0:2.8.4-4.el7 will be installed
—> Package checkpolicy.x86_64 0:2.5-8.el7 will be installed
—> Package libsemanage-python.x86_64 0:2.5-14.el7 will be installed
—> Package libyaml.x86_64 0:0.1.4-11.el7_0 will be installed
—> Package python-IPy.noarch 0:0.75-6.el7 will be installed
—> Package setools-libs.x86_64 0:3.3.8-4.el7 will be installed
–> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================================

Package Arch Version Repository Size

Installing:
docker x86_64 2:1.13.1-96.gitb2f74b2.el7.centos extras 18 M
Installing for dependencies:
PyYAML x86_64 3.10-11.el7 base 153 k
atomic-registries x86_64 1:1.22.1-26.gitb507039.el7.centos extras 35 k
audit-libs-python x86_64 2.8.4-4.el7 base 76 k
checkpolicy x86_64 2.5-8.el7 base 295 k
container-selinux noarch 2:2.95-2.el7_6 extras 39 k
container-storage-setup noarch 0.11.0-2.git5eaf76c.el7 extras 35 k
containers-common x86_64 1:0.1.35-2.git404c5bd.el7.centos extras 21 k
docker-client x86_64 2:1.13.1-96.gitb2f74b2.el7.centos extras 3.9 M
docker-common x86_64 2:1.13.1-96.gitb2f74b2.el7.centos extras 96 k
libsemanage-python x86_64 2.5-14.el7 base 113 k
libyaml x86_64 0.1.4-11.el7_0 base 55 k
oci-register-machine x86_64 1:0-6.git2b44233.el7 extras 1.1 M
oci-systemd-hook x86_64 1:0.1.18-3.git8787307.el7_6 extras 34 k
oci-umount x86_64 2:2.3.4-2.git87f9237.el7 extras 32 k
policycoreutils-python x86_64 2.5-29.el7_6.1 updates 456 k
python-IPy noarch 0.75-6.el7 base 32 k
python-pytoml noarch 0.1.14-1.git7dea353.el7 extras 18 k
setools-libs x86_64 3.3.8-4.el7 base 620 k

Transaction Summary

Install 1 Package (+18 Dependent packages)

Total download size: 25 M
Installed size: 87 M
Downloading packages:
(1/19): audit-libs-python-2.8.4-4.el7.x86_64.rpm | 76 kB 00:00:01
(2/19): atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64.rpm | 35 kB 00:00:01
(3/19): container-selinux-2.95-2.el7_6.noarch.rpm | 39 kB 00:00:01
(4/19): PyYAML-3.10-11.el7.x86_64.rpm | 153 kB 00:00:01
(5/19): containers-common-0.1.35-2.git404c5bd.el7.centos.x86_64.rpm | 21 kB 00:00:00
(6/19): docker-common-1.13.1-96.gitb2f74b2.el7.centos.x86_64.rpm | 96 kB 00:00:00
(7/19): libsemanage-python-2.5-14.el7.x86_64.rpm | 113 kB 00:00:01
(8/19): docker-client-1.13.1-96.gitb2f74b2.el7.centos.x86_64.rpm | 3.9 MB 00:00:02
(9/19): libyaml-0.1.4-11.el7_0.x86_64.rpm | 55 kB 00:00:00
(10/19): oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64.rpm | 34 kB 00:00:00
(11/19): oci-umount-2.3.4-2.git87f9237.el7.x86_64.rpm | 32 kB 00:00:00
(12/19): policycoreutils-python-2.5-29.el7_6.1.x86_64.rpm | 456 kB 00:00:00
(13/19): docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64.rpm | 18 MB 00:00:04
(14/19): python-IPy-0.75-6.el7.noarch.rpm | 32 kB 00:00:00
(15/19): python-pytoml-0.1.14-1.git7dea353.el7.noarch.rpm | 18 kB 00:00:00
(16/19): setools-libs-3.3.8-4.el7.x86_64.rpm | 620 kB 00:00:00
(17/19): oci-register-machine-0-6.git2b44233.el7.x86_64.rpm | 1.1 MB 00:00:03
container-storage-setup-0.11.0 FAILED
http://centos.sonn.com/7.6.1810/extras/x86_64/Packages/container-storage-setup-0.11.0-2.git5eaf76c.el7.noarch.rpm: [Errno 14] curl#6 – “Could not resolve host: centos.sonn.com; Unknown error”
Trying other mirror.
(18/19): container-storage-setup-0.11.0-2.git5eaf76c.el7.noarch.rpm | 35 kB 00:00:00

(19/19): checkpolicy-2.5-8.el7.x86_64.rpm | 295 kB 00:00:11

Total 2.1 MB/s | 25 MB 00:00:11
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64 1/19
Installing : setools-libs-3.3.8-4.el7.x86_64 2/19
Installing : 1:containers-common-0.1.35-2.git404c5bd.el7.centos.x86_64 3/19
Installing : checkpolicy-2.5-8.el7.x86_64 4/19
Installing : container-storage-setup-0.11.0-2.git5eaf76c.el7.noarch 5/19
Installing : python-IPy-0.75-6.el7.noarch 6/19
Installing : 1:oci-register-machine-0-6.git2b44233.el7.x86_64 7/19
Installing : libsemanage-python-2.5-14.el7.x86_64 8/19
Installing : libyaml-0.1.4-11.el7_0.x86_64 9/19
Installing : PyYAML-3.10-11.el7.x86_64 10/19
Installing : python-pytoml-0.1.14-1.git7dea353.el7.noarch 11/19
Installing : 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64 12/19
Installing : audit-libs-python-2.8.4-4.el7.x86_64 13/19
Installing : policycoreutils-python-2.5-29.el7_6.1.x86_64 14/19
Installing : 2:container-selinux-2.95-2.el7_6.noarch 15/19
setsebool: SELinux is disabled.
Installing : 2:oci-umount-2.3.4-2.git87f9237.el7.x86_64 16/19
Installing : 2:docker-common-1.13.1-96.gitb2f74b2.el7.centos.x86_64 17/19
Installing : 2:docker-client-1.13.1-96.gitb2f74b2.el7.centos.x86_64 18/19
Installing : 2:docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64 19/19
Verifying : 2:container-selinux-2.95-2.el7_6.noarch 1/19
Verifying : 2:oci-umount-2.3.4-2.git87f9237.el7.x86_64 2/19
Verifying : audit-libs-python-2.8.4-4.el7.x86_64 3/19
Verifying : python-pytoml-0.1.14-1.git7dea353.el7.noarch 4/19
Verifying : 2:docker-1.13.1-96.gitb2f74b2.el7.centos.x86_64 5/19
Verifying : libyaml-0.1.4-11.el7_0.x86_64 6/19
Verifying : 2:docker-client-1.13.1-96.gitb2f74b2.el7.centos.x86_64 7/19
Verifying : libsemanage-python-2.5-14.el7.x86_64 8/19
Verifying : 1:oci-register-machine-0-6.git2b44233.el7.x86_64 9/19
Verifying : python-IPy-0.75-6.el7.noarch 10/19
Verifying : container-storage-setup-0.11.0-2.git5eaf76c.el7.noarch 11/19
Verifying : checkpolicy-2.5-8.el7.x86_64 12/19
Verifying : 2:docker-common-1.13.1-96.gitb2f74b2.el7.centos.x86_64 13/19
Verifying : 1:containers-common-0.1.35-2.git404c5bd.el7.centos.x86_64 14/19
Verifying : policycoreutils-python-2.5-29.el7_6.1.x86_64 15/19
Verifying : PyYAML-3.10-11.el7.x86_64 16/19
Verifying : 1:atomic-registries-1.22.1-26.gitb507039.el7.centos.x86_64 17/19
Verifying : setools-libs-3.3.8-4.el7.x86_64 18/19
Verifying : 1:oci-systemd-hook-0.1.18-3.git8787307.el7_6.x86_64 19/19

Installed:
docker.x86_64 2:1.13.1-96.gitb2f74b2.el7.centos

Dependency Installed:
PyYAML.x86_64 0:3.10-11.el7 atomic-registries.x86_64 1:1.22.1-26.gitb507039.el7.centos
audit-libs-python.x86_64 0:2.8.4-4.el7 checkpolicy.x86_64 0:2.5-8.el7
container-selinux.noarch 2:2.95-2.el7_6 container-storage-setup.noarch 0:0.11.0-2.git5eaf76c.el7
containers-common.x86_64 1:0.1.35-2.git404c5bd.el7.centos docker-client.x86_64 2:1.13.1-96.gitb2f74b2.el7.centos
docker-common.x86_64 2:1.13.1-96.gitb2f74b2.el7.centos libsemanage-python.x86_64 0:2.5-14.el7
libyaml.x86_64 0:0.1.4-11.el7_0 oci-register-machine.x86_64 1:0-6.git2b44233.el7
oci-systemd-hook.x86_64 1:0.1.18-3.git8787307.el7_6 oci-umount.x86_64 2:2.3.4-2.git87f9237.el7
policycoreutils-python.x86_64 0:2.5-29.el7_6.1 python-IPy.noarch 0:0.75-6.el7
python-pytoml.noarch 0:0.1.14-1.git7dea353.el7 setools-libs.x86_64 0:3.3.8-4.el7

Complete!

Disable the swap partition and install bridge-utils

[root@localhost ~]# swapoff -a
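Note that swapoff -a only disables swap until the next reboot. To keep it off permanently you would also comment out the swap entry in /etc/fstab, for example (a sketch; it assumes the swap line contains the word "swap" surrounded by spaces):

# comment out the swap line so swap stays off after a reboot
sed -i '/ swap / s/^/#/' /etc/fstab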

[root@localhost ~]# yum install bridge-utils -y
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.xtom.com
 * epel: mirror.umd.edu
 * extras: mirror.sjc02.svwh.net
 * updates: mirrors.ocf.berkeley.edu
Package bridge-utils-1.5-9.el7.x86_64 already installed and latest version
Nothing to do

Configure the kernel parameters required by k8s

[root@localhost ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

[root@localhost ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/60-libvirtd.conf ...
fs.aio-max-nr = 1048576
* Applying /usr/lib/sysctl.d/99-docker.conf ...
fs.may_detach_mounts = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
* Applying /etc/sysctl.conf ...

[root@localhost ~]# lsmod | grep br_netfilter

[root@localhost ~]# modprobe br_netfilter

[root@localhost ~]# lsmod | grep br_netfilter
br_netfilter 22256 0
bridge 151336 1 br_netfilter
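modprobe only loads br_netfilter for the current boot. If you want it loaded automatically after a reboot on a systemd host such as CentOS 7, one common approach is a modules-load.d entry:

# have systemd-modules-load pull in br_netfilter at every boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf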

2. Install the kubeadm tools; since no repository provides them yet, this fails:

[root@localhost ~]# yum install -y kubelet kubeadm kubectl
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.xtom.com
 * epel: mirror.sjc02.svwh.net
 * extras: mirror.sjc02.svwh.net
 * updates: mirrors.ocf.berkeley.edu
No package kubelet available.
No package kubeadm available.
No package kubectl available.
Error: Nothing to do

Add Google's package repository, then install again

[root@localhost ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

[root@localhost ~]# yum repolist
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.xtom.com
 * epel: mirrors.rit.edu
 * extras: mirror.sjc02.svwh.net
 * updates: mirrors.sonic.net
kubernetes/signature | 454 B 00:00:00
Retrieving key from https://packages.cloud.google.com/yum/doc/yum-key.gpg
Importing GPG key 0xA7317B0F:
 Userid : "Google Cloud Packages Automatic Signing Key [email protected]"
 Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f
 From : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Is this ok [y/N]: y
Retrieving key from https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
kubernetes/signature | 1.4 kB 00:00:02 !!!
kubernetes/primary | 49 kB 00:00:01
kubernetes 351/351
repo id repo name status
base/7/x86_64 CentOS-7 - Base 10,019
epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 13,190
extras/7/x86_64 CentOS-7 - Extras 413
kubernetes Kubernetes 7+344
updates/7/x86_64 CentOS-7 - Updates 1,945
repolist: 25,574

Now install the kubeadm tools:

[root@localhost ~]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.xtom.com
 * epel: mirror.umd.edu
 * extras: mirror.sjc02.svwh.net
 * updates: mirrors.ocf.berkeley.edu
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.14.2-0 will be installed
--> Processing Dependency: kubernetes-cni >= 0.7.5 for package: kubeadm-1.14.2-0.x86_64
--> Processing Dependency: cri-tools >= 1.11.0 for package: kubeadm-1.14.2-0.x86_64
---> Package kubectl.x86_64 0:1.14.2-0 will be installed
---> Package kubelet.x86_64 0:1.14.2-0 will be installed
--> Processing Dependency: socat for package: kubelet-1.14.2-0.x86_64
--> Processing Dependency: conntrack for package: kubelet-1.14.2-0.x86_64
--> Running transaction check
---> Package conntrack-tools.x86_64 0:1.4.4-4.el7 will be installed
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
--> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
---> Package cri-tools.x86_64 0:1.12.0-0 will be installed
---> Package kubernetes-cni.x86_64 0:0.7.5-0 will be installed
---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed
--> Running transaction check
---> Package libnetfilter_cthelper.x86_64 0:1.0.0-9.el7 will be installed
---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7 will be installed
---> Package libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================================

Package Arch Version Repository Size

Installing:
kubeadm x86_64 1.14.2-0 kubernetes 8.7 M
kubectl x86_64 1.14.2-0 kubernetes 9.5 M
kubelet x86_64 1.14.2-0 kubernetes 23 M
Installing for dependencies:
conntrack-tools x86_64 1.4.4-4.el7 base 186 k
cri-tools x86_64 1.12.0-0 kubernetes 4.2 M
kubernetes-cni x86_64 0.7.5-0 kubernetes 10 M
libnetfilter_cthelper x86_64 1.0.0-9.el7 base 18 k
libnetfilter_cttimeout x86_64 1.0.0-6.el7 base 18 k
libnetfilter_queue x86_64 1.0.2-2.el7_2 base 23 k
socat x86_64 1.7.3.2-2.el7 base 290 k

Transaction Summary

Install 3 Packages (+7 Dependent packages)

Total download size: 56 M
Installed size: 256 M
Downloading packages:
(1/10): conntrack-tools-1.4.4-4.el7.x86_64.rpm | 186 kB 00:00:01
warning: /var/cache/yum/x86_64/7/kubernetes/packages/53edc739a0e51a4c17794de26b13ee5df939bd3161b37f503fe2af8980b41a89-cri-tools-1.12.0-0.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 3e1ba8d5: NOKEY
Public key for 53edc739a0e51a4c17794de26b13ee5df939bd3161b37f503fe2af8980b41a89-cri-tools-1.12.0-0.x86_64.rpm is not installed
(2/10): 53edc739a0e51a4c17794de26b13ee5df939bd3161b37f503fe2af8980b41a89-cri-tools-1.12.0-0.x86_64.rpm | 4.2 MB 00:00:04
(3/10): de639995840837d724cc5a4816733d5aef5a6bf384eaff22c786def53fb4e1d5-kubeadm-1.14.2-0.x86_64.rpm | 8.7 MB 00:00:05
(4/10): 7adc7890a14396a4ae88e7b8ed44c855c7d44dc3eefb98e4c729b99c2df6fa03-kubectl-1.14.2-0.x86_64.rpm | 9.5 MB 00:00:02
(5/10): libnetfilter_cttimeout-1.0.0-6.el7.x86_64.rpm | 18 kB 00:00:00
(6/10): libnetfilter_cthelper-1.0.0-9.el7.x86_64.rpm | 18 kB 00:00:01
(7/10): libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm | 23 kB 00:00:01
(8/10): 548a0dcd865c16a50980420ddfa5fbccb8b59621179798e6dc905c9bf8af3b34-kubernetes-cni-0.7.5-0.x86_64.rpm | 10 MB 00:00:02
(9/10): socat-1.7.3.2-2.el7.x86_64.rpm | 290 kB 00:00:01

(10/10): 1a181064b472261b78b534b5a233a4c73d505673c02acbe01d95db819940006e-kubelet-1.14.2-0.x86_64.rpm | 23 MB 00:00:03

Total 5.8 MB/s | 56 MB 00:00:09
Retrieving key from https://packages.cloud.google.com/yum/doc/yum-key.gpg
Importing GPG key 0xA7317B0F:
Userid : “Google Cloud Packages Automatic Signing Key [email protected]
Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f
From : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Retrieving key from https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Importing GPG key 0x3E1BA8D5:
Userid : “Google Cloud Packages RPM Signing Key [email protected]
Fingerprint: 3749 e1ba 95a8 6ce0 5454 6ed2 f09c 394c 3e1b a8d5
From : https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : socat-1.7.3.2-2.el7.x86_64 1/10
Installing : libnetfilter_cthelper-1.0.0-9.el7.x86_64 2/10
Installing : libnetfilter_queue-1.0.2-2.el7_2.x86_64 3/10
Installing : kubectl-1.14.2-0.x86_64 4/10
Installing : libnetfilter_cttimeout-1.0.0-6.el7.x86_64 5/10
Installing : conntrack-tools-1.4.4-4.el7.x86_64 6/10
Installing : kubernetes-cni-0.7.5-0.x86_64 7/10
Installing : kubelet-1.14.2-0.x86_64 8/10
Installing : cri-tools-1.12.0-0.x86_64 9/10
Installing : kubeadm-1.14.2-0.x86_64 10/10
Verifying : cri-tools-1.12.0-0.x86_64 1/10
Verifying : libnetfilter_cttimeout-1.0.0-6.el7.x86_64 2/10
Verifying : kubectl-1.14.2-0.x86_64 3/10
Verifying : libnetfilter_queue-1.0.2-2.el7_2.x86_64 4/10
Verifying : kubeadm-1.14.2-0.x86_64 5/10
Verifying : libnetfilter_cthelper-1.0.0-9.el7.x86_64 6/10
Verifying : kubelet-1.14.2-0.x86_64 7/10
Verifying : kubernetes-cni-0.7.5-0.x86_64 8/10
Verifying : socat-1.7.3.2-2.el7.x86_64 9/10
Verifying : conntrack-tools-1.4.4-4.el7.x86_64 10/10

Installed:
kubeadm.x86_64 0:1.14.2-0 kubectl.x86_64 0:1.14.2-0 kubelet.x86_64 0:1.14.2-0

Dependency Installed:
conntrack-tools.x86_64 0:1.4.4-4.el7 cri-tools.x86_64 0:1.12.0-0 kubernetes-cni.x86_64 0:0.7.5-0
libnetfilter_cthelper.x86_64 0:1.0.0-9.el7 libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7 libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
socat.x86_64 0:1.7.3.2-2.el7

Complete!

3. Initialize the master. As you can see below, I forgot to start Docker first, so it failed:

[root@localhost ~]# kubeadm init --apiserver-advertise-address 192.168.0.205 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.14.2
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 3.10.0-957.12.1.el7.x86_64
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled (as module)
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module)
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module)
CONFIG_OVERLAY_FS: enabled (as module)
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled (as module)
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
, error: exit status 1
[ERROR Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
[ERROR IsDockerSystemdCheck]: cannot execute 'docker info': exit status 1
[ERROR SystemVerification]: failed to get docker info: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Start Docker and try again:

[root@localhost ~]# systemctl restart docker

[root@localhost ~]# kubeadm init --apiserver-advertise-address 192.168.0.205 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.14.2
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.0.205 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.0.205 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.205]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.005607 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: edidaa.umann314693vc46u
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.205:6443 --token edidaa.umann314693vc46u \
--discovery-token-ca-cert-hash sha256:d3cec87bf46c35cdd379e8a23e55716a7b9f5520207519b2f47db6ff638ebf01
The installation succeeded.

4. Configure kubectl
As the output above suggests, kubectl should be run as a regular user, so add a user named fencatn with the password 123456.

[root@localhost ~]# useradd fencatn

[root@localhost ~]# echo '123456' | passwd --stdin fencatn
Changing password for user fencatn.
passwd: all authentication tokens updated successfully.
Switch to the fencatn user and continue:

[root@localhost ~]# su - fencatn

[fencatn@localhost ~]$ echo $HOME
/home/fencatn

[fencatn@localhost ~]$ mkdir -p $HOME/.kube
I had forgotten to give fencatn sudo privileges, so I had to go back and set that up first.

[fencatn@localhost ~]$ exit
logout
You have new mail in /var/spool/mail/root

[root@localhost ~]# vim /etc/sudoers
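The edit itself is not shown above; a typical sudoers entry granting the user full sudo rights (with a password prompt) is a single line like the one below, although adding it through visudo is the safer route:

# allow fencatn to run any command via sudo
fencatn ALL=(ALL) ALL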

[root@localhost ~]# su - fencatn
Last login: Fri May 17 09:23:09 EDT 2019 on pts/0

[fencatn@localhost ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[sudo] password for fencatn:

[fencatn@localhost ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

[fencatn@localhost ~]$ ll /home/fencatn/.kube/config
-rw------- 1 fencatn fencatn 5453 May 17 09:26 /home/fencatn/.kube/config

To make kubectl quicker to use, enable its command completion (this simply generates the kubectl completion script and sources it):

[fencatn@localhost ~]$ tail -1 .bashrc
source <(kubectl completion bash)

5. Before going any further, configure the hostnames. Admittedly this should have been done at the very beginning.
My setup has four nodes: k8s-master, k8s-node1, k8s-node2 and k8s-node3
192.168.0.205 k8s-master
192.168.0.206 k8s-node1
192.168.0.207 k8s-node2
192.168.0.208 k8s-node3

[root@localhost ~]# hostnamectl set-hostname k8s-mster

[root@localhost ~]# vim /etc/hosts
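The entries added to /etc/hosts match the node list above:

192.168.0.205 k8s-master
192.168.0.206 k8s-node1
192.168.0.207 k8s-node2
192.168.0.208 k8s-node3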

[root@localhost ~]# exit
logout
Connection closing…Socket close.

Connection closed by foreign host.

Disconnected from remote host(k8s-1) at 21:59:21.

Type `help’ to learn how to use Xshell prompt.
[C:~]$ ssh root@192.168.0.205

Connecting to 192.168.0.205:22…
Connection established.
To escape to local shell, press ‘Ctrl+Alt+]’.

Last login: Fri May 17 08:45:00 2019 from 192.168.0.15

[root@k8s-mster ~]# ping k8s-node3
PING k8s-node3 (192.168.0.208) 56(84) bytes of data.
64 bytes from k8s-node3 (192.168.0.208): icmp_seq=1 ttl=64 time=1.96 ms
^C
--- k8s-node3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.960/1.960/1.960/0.000 ms

Change the other three nodes the same way; I won't paste all of it here.

6. Install the pod network; here I use flannel.

[fencatn@k8s-mster ~]$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

In case some of you cannot reach this manifest, I have copied it below for reference:
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml


apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp.flannel.unprivileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
privileged: false
volumes:
– configMap
– secret
– emptyDir
– hostPath
allowedHostPaths:
– pathPrefix: “/etc/cni/net.d”
– pathPrefix: “/etc/kube-flannel”
– pathPrefix: “/run/flannel”
readOnlyRootFilesystem: false
# Users and groups
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
# Privilege Escalation
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
# Capabilities
allowedCapabilities: [‘NET_ADMIN’]
defaultAddCapabilities: []
requiredDropCapabilities: []
# Host namespaces
hostPID: false
hostIPC: false
hostNetwork: true
hostPorts:

  • min: 0

    max: 65535

    # SELinux

    seLinux:

    # SELinux is unsed in CaaSP

rule: ‘RunAsAny’

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
rules:

  • apiGroups: [‘extensions’]

    resources: [‘podsecuritypolicies’]

    verbs: [‘use’]

    resourceNames: [‘psp.flannel.unprivileged’]

  • apiGroups:

    • “”

      resources:

    • pods

      verbs:

    • get

  • apiGroups:

    • “”

      resources:

    • nodes

      verbs:

    • list

    • watch

  • apiGroups:

    • “”

      resources:

    • nodes/status

      verbs:

– patch

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:

  • kind: ServiceAccount

    name: flannel

namespace: kube-system

apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel

namespace: kube-system

kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
“name”: “cbr0”,
“plugins”: [
{
“type”: “flannel”,
“delegate”: {
“hairpinMode”: true,
“isDefaultGateway”: true
}
},
{
“type”: “portmap”,
“capabilities”: {
“portMappings”: true
}
}
]
}
net-conf.json: |
{
“Network”: “10.244.0.0/16”,
“Backend”: {
“Type”: “vxlan”
}

}

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: kube-flannel-ds-amd64
namespace: kube-system
labels:
tier: node
app: flannel
spec:
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: ppc64le
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: s390x
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
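The DaemonSets above differ only in CPU architecture; only the one matching your nodes will actually schedule pods. Once the manifest has been applied, a quick sanity check is to list the flannel DaemonSets and pods by their app=flannel label:

# kubectl -n kube-system get daemonset -l app=flannel
# kubectl -n kube-system get pods -l app=flannel -o wide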

7. Adding the worker nodes
I'll use one node as an example; the others are exactly the same. The join command was already printed at the end of the master initialization output, so you can simply copy and paste it.
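If that output has scrolled away or the token has expired, the join command can be regenerated on the master at any time; on kubeadm 1.14 the command below prints a fresh, ready-to-paste join command:

# kubeadm token create --print-join-command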

[root@k8s-node1 ~]# kubeadm join 192.168.0.205:6443 --token edidaa.umann314693vc46u \
    --discovery-token-ca-cert-hash sha256:d3cec87bf46c35cdd379e8a23e55716a7b9f5520207519b2f47db6ff638ebf01
[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node1 ~]#

8. Verifying the installation

[fencatn@k8s-mster ~]$ kubectl get nodes
NAME                    STATUS     ROLES    AGE   VERSION
k8s-node1               NotReady   <none>   75s   v1.14.2
k8s-node2               NotReady   <none>   46s   v1.14.2
k8s-node3               NotReady   <none>   13s   v1.14.2
localhost.localdomain   Ready      master   75m   v1.14.2

[fencatn@k8s-mster ~]$ kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-fb8b8dccf-997w2 1/1 Running 0 76m
kube-system coredns-fb8b8dccf-qdqx9 1/1 Running 0 76m
kube-system etcd-localhost.localdomain 1/1 Running 0 76m
kube-system kube-apiserver-localhost.localdomain 1/1 Running 0 75m
kube-system kube-controller-manager-localhost.localdomain 1/1 Running 0 75m
kube-system kube-flannel-ds-amd64-4gp9x 1/1 Running 2 3m9s
kube-system kube-flannel-ds-amd64-spnht 1/1 Running 0 23m
kube-system kube-flannel-ds-amd64-vgssj 1/1 Running 0 2m40s
kube-system kube-flannel-ds-amd64-zx72c 1/1 Running 3 2m7s
kube-system kube-proxy-jgqsg 1/1 Running 0 3m9s
kube-system kube-proxy-nfv49 1/1 Running 0 76m
kube-system kube-proxy-nvwdx 1/1 Running 0 2m40s
kube-system kube-proxy-ptqc5 1/1 Running 0 2m7s
kube-system kube-scheduler-localhost.localdomain 1/1 Running 0 76m

[fencatn@k8s-mster ~]$

If a pod shows Pending, ContainerCreating or ImagePullBackOff, it is not ready yet; only Running is healthy. When something looks wrong, run kubectl describe pod on it to dig in, and remember to specify the namespace with -n:

[fencatn@k8s-mster ~]$ kubectl describe pod kube-controller-manager-localhost.localdomain -n kube-system
Name: kube-controller-manager-localhost.localdomain
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: localhost.localdomain/192.168.0.205
Start Time: Fri, 17 May 2019 09:09:40 -0400
Labels: component=kube-controller-manager
tier=control-plane
Annotations: kubernetes.io/config.hash: 59eb3b921f0bf7278134e458380d3d58
kubernetes.io/config.mirror: 59eb3b921f0bf7278134e458380d3d58
kubernetes.io/config.seen: 2019-05-17T09:09:40.01042606-04:00
kubernetes.io/config.source: file
Status: Running
IP: 192.168.0.205
Containers:
kube-controller-manager:
Container ID: docker://a1ac4f5833a9760674213f47edc01a0836606b45406fac415b5b325493a6ed18
Image: k8s.gcr.io/kube-controller-manager:v1.14.2
Image ID: docker-pullable://k8s.gcr.io/kube-controller-manager@sha256:51a382e90acd9d11d5571850312ad4b268db8b28b2868516dfda19a6933a095c
Port: <none>
Host Port: <none>
Command:
kube-controller-manager
--allocate-node-cidrs=true
--authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
--authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
--bind-address=127.0.0.1
--client-ca-file=/etc/kubernetes/pki/ca.crt
--cluster-cidr=10.244.0.0/16
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
--cluster-signing-key-file=/etc/kubernetes/pki/ca.key
--controllers=*,bootstrapsigner,tokencleaner
--kubeconfig=/etc/kubernetes/controller-manager.conf
--leader-elect=true
--node-cidr-mask-size=24
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--root-ca-file=/etc/kubernetes/pki/ca.crt
--service-account-private-key-file=/etc/kubernetes/pki/sa.key
--use-service-account-credentials=true
State: Running
Started: Fri, 17 May 2019 09:09:41 -0400
Ready: True
Restart Count: 0
Requests:
cpu: 200m
Liveness: http-get http://127.0.0.1:10252/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
Environment: <none>
Mounts:
/etc/kubernetes/controller-manager.conf from kubeconfig (ro)
/etc/kubernetes/pki from k8s-certs (ro)
/etc/pki from etc-pki (ro)
/etc/ssl/certs from ca-certs (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
ca-certs:
Type: HostPath (bare host directory volume)
Path: /etc/ssl/certs
HostPathType: DirectoryOrCreate
etc-pki:
Type: HostPath (bare host directory volume)
Path: /etc/pki
HostPathType: DirectoryOrCreate
k8s-certs:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki
HostPathType: DirectoryOrCreate
kubeconfig:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/controller-manager.conf
HostPathType: FileOrCreate
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute
Events: <none>

That completes the deployment; detailed usage will be covered in later posts.


How to do an exact (whole-word) match with grep

Without any options, grep does substring matching rather than exact matching. For example:

[root@docker ~]# ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 192.168.1.112 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::5054:ff:fe82:bc7a prefixlen 64 scopeid 0x20<link>
ether 52:54:00:82:bc:7a txqueuelen 1000 (Ethernet)
RX packets 4219 bytes 317709 (310.2 KiB)
RX errors 0 dropped 9 overruns 0 frame 0
TX packets 1486 bytes 199942 (195.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

[root@docker ~]# ifconfig eth0 | grep inet
inet 192.168.1.112 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::5054:ff:fe82:bc7a prefixlen 64 scopeid 0x20<link>

How do we match only the line that contains exactly "inet"? Simple: add -w.

[root@docker ~]# ifconfig eth0 | grep -w inet
inet 192.168.1.112 netmask 255.255.255.0 broadcast 192.168.1.255
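-w restricts the match to whole words. Roughly the same result can be had with explicit word-boundary anchors; a small alternative sketch using GNU grep:

# ifconfig eth0 | grep '\<inet\>'
inet 192.168.1.112 netmask 255.255.255.0 broadcast 192.168.1.255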

Creating a PyCharm desktop shortcut on Linux

First, a note: launching PyCharm works the same way on most distributions, and so does creating the shortcut; just replace the paths below with your own.

Here are the steps I used successfully:

First download PyCharm from the official site. There are two editions: Professional is paid (activation codes and free license servers found online tend to get blocked once enough people use them), while Community is free and, feature-wise, close enough for a beginner.

1. Create the desktop entry. In a terminal, run the following to open gedit and create the Pycharm.desktop file:

sudo gedit /usr/share/applications/Pycharm.desktop

2. In the Pycharm.desktop file that opens, enter the following:
[Desktop Entry]
Type=Application
Name=Pycharm
GenericName=Pycharm3
Comment=Pycharm3:The Python IDE
Exec="/home/wodewenjian/Downloads/pycharm-community-2018.2/bin/pycharm.sh" %f
Icon=/home/wodewenjian/Downloads/pycharm-community-2018.2/bin/pycharm.png
Terminal=false
Categories=Pycharm;
Exec and Icon are the paths to your own pycharm.sh and pycharm.png; you can find them in the bin directory of the PyCharm archive you extracted.

3. Go to /usr/share/applications, find the Pycharm entry and copy it to the desktop. (If there is no icon, the paths you entered are most likely wrong.)

4. Double-click the icon on the desktop to launch it. Enjoy!
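If the icon does not show up right away, copying the entry to the desktop from a terminal also works (paths assumed from step 1; some desktops additionally require marking the launcher as trusted):

cp /usr/share/applications/Pycharm.desktop ~/Desktop/
chmod +x ~/Desktop/Pycharm.desktop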


docker push/pull error: Get https://xxx.xxx.xxx.xxx:5000/v1/_ping: http: server gave HTTP response to HTTPS client

docker push (or pull) against a plain-HTTP private registry fails with the error below; the fix follows:

[root@docker mynginx]# docker push 192.168.1.112:5000/test
The push refers to a repository [192.168.1.112:5000/test]
Get https://192.168.1.112:5000/v1/_ping: http: server gave HTTP response to HTTPS client

Run the following commands:

echo '{ "insecure-registries":["xxx.xxx.xxx.xxx:5000"] }' > /etc/docker/daemon.json
systemctl restart docker
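Note that the echo above overwrites any existing /etc/docker/daemon.json, so merge the insecure-registries key in by hand if the file already contains other settings. After the restart, the registry should be listed in the daemon info:

# docker info | grep -A 1 'Insecure Registries'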

Run it again and it works:

[root@docker mynginx]# docker push 192.168.1.112:5000/test
The push refers to a repository [192.168.1.112:5000/test]
e7d38b597dc1: Pushed
332fa54c5886: Pushed
6ba094226eea: Pushed
6270adb5794c: Pushed
latest: digest: sha256:147dd56bbd7f2db7862728b1a682259791aa11bd84969bf4c82542abef46059c size: 1155

docker WARNING: IPv4 forwarding is disabled — how to fix it

[root@docker mynginx]# docker run -d -p 5000:5000 registry
WARNING: IPv4 forwarding is disabled. Networking will not work.
38c2e912d402a15bedb1a13896d40a7a597dc5944c1f6868e9b7c62634e4c906

Solution:

On the host, run:

# echo "net.ipv4.ip_forward=1" >> /usr/lib/sysctl.d/00-system.conf

Then restart the network and docker services:

# systemctl restart network && systemctl restart docker
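To confirm that forwarding is really on after the restart (it should print 1):

# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1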

PHP tuning in detail

Reposted from https://www.cnblogs.com/yueminghai/p/8657861.html

The previous post covered the full installation of PHP 5.6.30 on CentOS 7.0: http://www.cnblogs.com/riverdubu/p/6428226.html

Today we'll go through PHP-FPM configuration and tuning.

PHP-FPM configuration

First, a quick definition: PHP-FPM (short for PHP FastCGI Process Manager) is the software that manages pools of PHP processes and receives and handles requests coming from a web server such as nginx.

PHP-FPM starts a master process (usually running as the system root user) that controls when and how HTTP requests are forwarded to one or more child processes.

That master process is PHP-FPM's main process.

The master also decides when child processes are created and destroyed; every process in a PHP-FPM pool lives longer than a single HTTP request. This chapter is not about processes, so the concept is not explained further here.

The PHP-FPM configuration file lives under /usr/local/php/etc/. Open it:

vim /usr/local/php/etc/php-fpm.conf

You will see many commented-out sections (the semicolon is the comment character here). A quick way to search in vim: in normal mode press '/', type the word you are looking for, then press 'n' to jump to the next match.

Global settings

Two parameters are worth explaining first.

emergency_restart_threshold: if more PHP-FPM child processes than this value fail within the period given by emergency_restart_interval, PHP-FPM restarts itself. This is a basic safety net for PHP-FPM; the values below are a sensible starting point.
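A commonly used pair of values (my suggestion for a starting point, not taken from the original post) in php-fpm.conf:

emergency_restart_threshold = 10
emergency_restart_interval = 1m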

Key pool settings

Each PHP-FPM pool runs as a specified system user and group. I recommend running each pool as its own non-root user; that way it is easy to tell the pools of different PHP applications apart when you run top.

The IP address and port the pool listens on.

The system user (and group) that own the pool's child processes; set these to the non-root user (group) that runs the PHP application.

The IP address(es) allowed to send requests to this pool. For safety I set it to the local machine or comment it out; open it up if you actually need to.

The maximum number of processes in the pool. Derive the number from the memory you allocate to the PHP service, roughly as follows.

How much memory goes to PHP in total? I have a 2 GB VPS from Aliyun; after Nginx, MySQL and Memcache take their share, around 512 MB for PHP feels about right.

How much memory does a single PHP process use on average? Usually 5-20 MB; file uploads, image processing and the like cost extra.

How many PHP-FPM processes can that support? Simple division (512 MB divided by roughly 15 MB per process) shows this VPS can carry about 30 processes; if that is not enough, consider adding memory to the server.

The number of processes that are immediately available in the pool when PHP-FPM starts. Keeping the default is fine; the point is to have a couple of processes already waiting for requests, so the application's first few HTTP requests do not have to wait for the pool to initialize.

The minimum/maximum number of processes that may sit in the pool while the PHP application is idle.

The maximum number of HTTP requests each pool process may handle before being recycled; again, calculate it for your own workload.

The slow log is also worth configuring: it records information about HTTP requests that take longer than n seconds, which helps track down why PHP is slow.

The n above is usually set to 5 seconds.
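Pulling the settings above together, a minimal example pool section might look like the following; the pool name, user and paths are illustrative assumptions, and the numbers follow the 512 MB / ~15 MB-per-process estimate made above:

[www]
user = www-data
group = www-data
listen = 127.0.0.1:9000
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 30          ; ~512 MB budget / ~15 MB per PHP process
pm.start_servers = 2
pm.min_spare_servers = 2
pm.max_spare_servers = 10
pm.max_requests = 500
slowlog = /usr/local/php/log/www.log.slow
request_slowlog_timeout = 5s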

Save and quit, then restart the PHP-FPM service:

#service php-fpm restart

The slow-log file may not exist yet; just create it:

#mkdir -p /usr/local/php/log/
#touch /usr/local/php/log/www.log.slow

How PHP requests are processed

Let's look at how a PHP script is typically handled on each HTTP request.

First, nginx forwards the HTTP request to PHP-FPM, and PHP-FPM hands the request to one of its PHP child processes.

The PHP process locates the corresponding script, reads it, compiles it into opcode (bytecode) form, executes the compiled opcodes and generates the response.

Finally, the HTTP response is sent back to nginx, which returns it to the HTTP client.

The php.ini tuning plan

The PHP interpreter is configured and tuned in php.ini, so first we need to find where that file lives.

Think back to the phpinfo.php page we opened yesterday to inspect PHP; careful readers will already have spotted the php.ini location in its output.

Open the file:

#vim /usr/local/php/lib/php.ini

Memory

The default memory limit is fairly sensible. For a large site you might raise it to 512M; for a personal site the default is plenty, or you can even drop it to 64M.
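For reference, the corresponding php.ini line (128M is the usual default):

memory_limit = 128M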

Zend OPcache

Bytecode caching is not a new PHP feature; standalone extensions such as APC, eAccelerator and XCache have provided it for a long time. Since PHP 5.5.0 a bytecode cache called Zend OPcache has been built in.

Zend OPcache automatically caches pre-compiled PHP bytecode in memory; when a file's bytecode is already cached, the cached bytecode is executed directly. PHP is an interpreted language: the interpreter parses the script, compiles the code into a series of Zend opcodes, and then executes that bytecode, much as C or assembly is turned into machine code. What gets cached is the executable bytecode.

If opcache.validate_timestamps is set to 0 in php.ini, Zend OPcache will not notice changes to PHP scripts; you have to clear the cached bytecode manually before modified PHP files are picked up.

Here is a recommended set of Zend OPcache settings:
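A starting set, matching the per-directive values discussed below:

opcache.enable = 1
opcache.memory_consumption = 64
opcache.interned_strings_buffer = 16
opcache.max_accelerated_files = 4000
opcache.validate_timestamps = 1
opcache.revalidate_freq = 0
opcache.fast_shutdown = 1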

These are only recommendations; each enabled setting is explained below.

First, OPcache needs to be switched on: opcache.enable=1

; Determines if Zend OPCache is enabled for the CLI version of PHP
;opcache.enable_cli=0

; The OPcache shared memory storage size.
opcache.memory_consumption=64

The amount of memory allocated to the opcode cache.

; The amount of memory for interned strings in Mbytes.
opcache.interned_strings_buffer=16

The amount of memory used for interned strings. What is an interned string? Behind the scenes the PHP interpreter detects multiple instances of the same string, stores the string in memory once, and uses pointers when the same string is used again, which saves memory.

; The maximum number of keys (scripts) in the OPcache hash table.
; Only numbers between 200 and 100000 are allowed.
opcache.max_accelerated_files=4000

The maximum number of PHP scripts the opcode cache can hold.

; When disabled, you must reset the OPcache manually or restart the
; webserver for changes to the filesystem to take effect.
opcache.validate_timestamps=1

When this is set to 1, PHP will periodically check whether the contents of PHP scripts have changed.

; How often (in seconds) to check file timestamps for changes to the shared
; memory storage allocation. ("1" means validate once per second, but only
; once per request. "0" means always validate)
opcache.revalidate_freq=0

Sets how often (in seconds) PHP checks PHP scripts for changes. With 0, the files are revalidated on every request, so code changes show up immediately; note that on a busy production site this adds some overhead, and such setups often prefer a larger interval or opcache.validate_timestamps=0 instead.

; If enabled, a fast shutdown sequence is used for the accelerated code
opcache.fast_shutdown=1

This enables a faster shutdown sequence for cached code, handing object destruction and memory release over to the Zend Engine memory manager.

Maximum execution time

The default maximum execution time is 30 seconds. Letting a request run for 30 seconds would leave visitors waiting far too long, so set it to 5; long-running tasks should be moved into separate processes of their own.
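In php.ini that is simply:

max_execution_time = 5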

Session handling

If you are familiar with Memcache or Redis, consider storing sessions in one of them instead: it is fast, and it makes it easier to scale out later.
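A sketch of the php.ini settings for Redis-backed sessions, assuming the phpredis extension is installed and Redis listens locally on its default port (Memcached works the same way with its own handler):

session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379"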

That's it for PHP configuration and tuning. If you need to tune file uploads, output buffering or other settings, see php.net; there is much more to learn on the official site. Questions are welcome in the comments. ^_^

Note 1

Enabling Zend OPcache also requires one more line; add this in the opcache section mentioned above:

zend_extension=/usr/local/php/lib/php/extensions/no-debug-non-zts-20131226/opcache.so

Restart php-fpm and restart Nginx; the OPcache information will then show up in phpinfo.


Nginx tuning in detail

Source: http://9388751.blog.51cto.com/9378751/1676821

1. Generally speaking, the directives in the nginx configuration file that matter most for performance are the following:

  1. worker_processes 8;

The number of nginx worker processes. It is recommended to set this according to the number of CPUs, usually a multiple of it (e.g. two quad-core CPUs count as 8).

  2. worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;

Binds each worker to a CPU. In the example above the 8 workers are pinned to 8 CPUs; you can also list more masks, or bind one worker to several CPUs.

  3. worker_rlimit_nofile 65535;

The maximum number of file descriptors a single nginx worker may open. In theory it should be the system's maximum number of open files (ulimit -n) divided by the number of workers, but nginx does not distribute requests that evenly, so it is best to keep it equal to the value of ulimit -n.

With the open-file limit set to 65535 on a Linux 2.6 kernel, worker_rlimit_nofile should accordingly be set to 65535.

The reason is that nginx does not spread requests across workers perfectly evenly: if you set 10240 and total concurrency reaches 30,000-40,000, some worker may exceed 10240 and nginx will start returning 502 errors.

How to check the system's file-descriptor limits:

[root@web001 ~]# sysctl -a | grep fs.file

fs.file-max = 789972

fs.file-nr = 510 0 789972

  4. use epoll;

Use the epoll I/O event model.

(

A side note: like Apache, nginx has different event models for different operating systems.

A) Standard event models
select and poll are the standard event models; nginx falls back to select or poll if the current system offers nothing more efficient.
B) High-efficiency event models
kqueue: used on FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0 and MacOS X. On dual-processor MacOS X machines, kqueue may cause kernel panics.
epoll: used on Linux kernel 2.6 and later.

/dev/poll: used on Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+ and Tru64 UNIX 5.1A+.

eventport: used on Solaris 10; to avoid kernel panics the relevant security patches must be installed.

)

  5. worker_connections 65535;

The maximum number of connections per worker; in theory the maximum number of connections an nginx server can handle is worker_processes * worker_connections.

  6. keepalive_timeout 60;

The keepalive timeout.

  7. client_header_buffer_size 4k;

The buffer size for client request headers. It can be set according to your system's page size; a request header normally stays under 1k, but since system pages are generally larger than 1k, it is set to the page size here.

The page size can be obtained with getconf PAGESIZE:

[root@web001 ~]# getconf PAGESIZE

4096

There are cases where client_header_buffer_size needs to exceed 4k, but its value must always be an integer multiple of the system page size.

  8. open_file_cache max=65535 inactive=60s;

Enables a cache for open files (it is off by default). max sets the number of cached entries, which should match the number of open files; inactive is how long a file may go unrequested before its cache entry is removed.

  9. open_file_cache_valid 80s;

How often the cached information is checked for validity.

  10. open_file_cache_min_uses 1;

The minimum number of times a file must be used within the inactive period of open_file_cache for its descriptor to stay open in the cache. As in the example above, a file that is not used even once within the inactive period is removed.

2. Kernel parameter tuning:

net.ipv4.tcp_max_tw_buckets = 6000

The number of TIME-WAIT sockets allowed; the default is 180000.

net.ipv4.ip_local_port_range = 1024 65000

The range of local ports the system may use.

net.ipv4.tcp_tw_recycle = 1

Enables fast recycling of TIME-WAIT sockets.

net.ipv4.tcp_tw_reuse = 1

Enables reuse, allowing TIME-WAIT sockets to be reused for new TCP connections.

net.ipv4.tcp_syncookies = 1

Enables SYN cookies: when the SYN wait queue overflows, cookies are used to handle the connections.

net.core.somaxconn = 262144

The backlog of the listen() call in web applications is capped by the kernel's net.core.somaxconn, which defaults to 128, while nginx's NGX_LISTEN_BACKLOG defaults to 511, so this value is worth raising.

net.core.netdev_max_backlog = 262144

The maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.

net.ipv4.tcp_max_orphans = 262144

The maximum number of TCP sockets in the system that are not attached to any user file handle. Beyond this number, orphaned connections are reset immediately and a warning is printed. The limit exists only to guard against simple DoS attacks; do not rely on it or lower it artificially, and rather increase it (after adding memory).

net.ipv4.tcp_max_syn_backlog = 262144

The maximum number of recorded connection requests that have not yet been acknowledged by the client. The default is 1024 for systems with 128 MB of memory and 128 for small-memory systems.

net.ipv4.tcp_timestamps = 0

TCP timestamps guard against sequence-number wrap-around: a 1 Gbps link will certainly encounter previously used sequence numbers, and timestamps let the kernel accept such "abnormal" packets. Here they are turned off.

net.ipv4.tcp_synack_retries = 1

To open a connection to a peer, the kernel sends a SYN carrying an ACK for the previous SYN, i.e. the second step of the three-way handshake. This setting determines how many SYN+ACK packets the kernel sends before giving up on the connection.

net.ipv4.tcp_syn_retries = 1

The number of SYN packets sent before the kernel gives up on establishing the connection.

net.ipv4.tcp_fin_timeout = 1

If the socket is closed by the local end, this determines how long it stays in FIN-WAIT-2. The peer may misbehave and never close the connection, or even crash unexpectedly. The default is 60 seconds; the usual value on 2.2 kernels was 180 seconds. You can keep that, but remember that even a lightly loaded web server risks running out of memory because of dead sockets. FIN-WAIT-2 is less dangerous than FIN-WAIT-1 because it consumes at most 1.5K of memory, but such sockets can live longer.

net.ipv4.tcp_keepalive_time = 30

How often TCP sends keepalive messages when keepalive is enabled. The default is 2 hours.

3. A complete set of kernel tuning values:

Edit /etc/sysctl.conf (on CentOS 5.5 you can clear the file and replace its entire contents with the following):

net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65000

To make the settings take effect immediately, run:
/sbin/sysctl -p

4. Tuning the system connection limits

On a default Linux install, open files and max user processes are both 1024:

ulimit -n

1024

ulimit -u

1024

Problem: this means the server may only have 1024 files open at a time and handle 1024 user processes.

ulimit -a shows all of the current limits; ulimit -n shows the current maximum number of open files.

A freshly installed Linux system defaults to 1024, which a busy server easily exceeds, producing "error: too many open files", so the limit needs to be raised.

Solution:

ulimit -n 65535 changes it immediately, but the change is lost after a reboot. (Note: ulimit -SHn 65535 is equivalent to ulimit -n 65535; -S means soft, -H means hard.)

There are three ways to make it persistent:

  1. Add a line to /etc/rc.local: ulimit -SHn 65535
  2. Add a line to /etc/profile: ulimit -SHn 65535
  3. Add the following at the end of /etc/security/limits.conf:
  * soft nofile 65535
  * hard nofile 65535
  * soft nproc 65535
  * hard nproc 65535

Which one to use: on CentOS the first method has no effect and the third does; on Debian the second one works.

ulimit -n

65535

ulimit -u

65535

Note: ulimit itself distinguishes soft and hard limits: -H sets the hard limit, -S the soft limit, and the soft limit is what is displayed by default.

The soft limit is the value currently in effect. The hard limit can be lowered by an ordinary user but not raised; the soft limit can never be set higher than the hard limit; only root can raise the hard limit.

5. A simple nginx configuration file:

user www www;
worker_processes 8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000
01000000;
error_log /www/log/nginx_error.log crit;
pid /usr/local/nginx/nginx.pid;
worker_rlimit_nofile 204800;
events
{
use epoll;
worker_connections 204800;
}
http
{
include mime.types;
default_type application/octet-stream;
charset utf-8;
server_names_hash_bucket_size 128;
client_header_buffer_size 2k;
large_client_header_buffers 4 4k;
client_max_body_size 8m;
sendfile on;
tcp_nopush on;
keepalive_timeout 60;
fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2
keys_zone=TEST:10m
inactive=5m;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
fastcgi_buffer_size 4k;
fastcgi_buffers 8 4k;
fastcgi_busy_buffers_size 8k;
fastcgi_temp_file_write_size 8k;
fastcgi_cache TEST;
fastcgi_cache_valid 200 302 1h;
fastcgi_cache_valid 301 1d;
fastcgi_cache_valid any 1m;
fastcgi_cache_min_uses 1;
fastcgi_cache_use_stale error timeout invalid_header http_500;
open_file_cache max=204800 inactive=20s;
open_file_cache_min_uses 1;
open_file_cache_valid 30s;
tcp_nodelay on;
gzip on;
gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_types text/plain application/x-javascript text/css application/xml;
gzip_vary on;
server
{
listen 8080;
server_name backup.aiju.com;
index index.php index.htm;
root /www/html/;
location /status
{
stub_status on;
}
location ~ .*\.(php|php5)?$
{
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fcgi.conf;
}
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|js|css)$
{
expires 30d;
}
log_format access '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" $http_x_forwarded_for';
access_log /www/log/access.log access;
}
}

6. A few FastCGI-related directives:

fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=TEST:10m inactive=5m;

This directive sets a path for the FastCGI cache, the directory hierarchy levels, the key zone storage, and the inactivity timeout after which entries are removed.

fastcgi_connect_timeout 300;

The timeout for connecting to the backend FastCGI server.

fastcgi_send_timeout 300;

The timeout for sending a request to FastCGI, counted after the handshake has completed.

fastcgi_read_timeout 300;

The timeout for receiving the FastCGI response, counted after the handshake has completed.

fastcgi_buffer_size 4k;

The buffer size used for reading the first part of the FastCGI response. The first part normally stays under 1k, but since the page size is 4k it is set to 4k here.

fastcgi_buffers 8 4k;

How many local buffers of what size are used to buffer the FastCGI response.

fastcgi_busy_buffers_size 8k;

The amount of buffered response data that may be busy being sent to the client while the rest of the response is still being read; by default it is twice fastcgi_buffers.

fastcgi_temp_file_write_size 8k;

The size of the data blocks written to fastcgi_temp_path; the default is twice fastcgi_buffers.

fastcgi_cache TEST;

Enables the FastCGI cache and gives it a name. In my experience enabling the cache is very useful: it noticeably lowers CPU load and helps prevent 502 errors.

fastcgi_cache_valid 200 302 1h;
fastcgi_cache_valid 301 1d;
fastcgi_cache_valid any 1m;

Sets cache lifetimes per response code: in the example, 200 and 302 responses are cached for one hour, 301 responses for one day, and everything else for one minute.

fastcgi_cache_min_uses 1;

The minimum number of uses within the inactive period of fastcgi_cache_path: as configured above, an entry that is not used even once within 5 minutes is removed.

fastcgi_cache_use_stale error timeout invalid_header http_500;

Tells nginx in which situations (errors, timeouts, invalid headers, HTTP 500 from the backend) it may serve a stale cached response instead of failing. Those are the FastCGI-related parameters in nginx; FastCGI itself also has settings worth tuning. If you manage FastCGI with php-fpm, you can adjust the following values in its configuration file:

60

The number of concurrent requests handled at the same time, i.e. it will start at most 60 children to handle concurrent connections.

102400

The maximum number of open files.

204800

The maximum number of requests each process may serve before it is respawned.
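The original post lists only bare values because it used the old XML-style php-fpm.conf; in today's ini-style pool configuration the three numbers above most likely map to the following directives (the mapping is my reading, not part of the source):

pm.max_children = 60
rlimit_files = 102400
pm.max_requests = 204800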

Source: http://9388751.blog.51cto.com/9378751/1676821


Windows 10: new or deleted folders not refreshing automatically

Recently, for reasons unknown, my Windows 10 stopped refreshing automatically after ordinary operations such as creating, deleting or renaming files; I had to refresh manually to see the result, which was very inconvenient. A search turned up the following fix, which works (a command-line equivalent is sketched after the steps):

Edit the registry (verified working):
    Press Win+R to open the Run dialog;
    Type regedit and press Enter to open the Registry Editor;
    Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Update;
    Under Update, set the value of UpdateMode to 0 (hexadecimal);
    If the Update key does not exist, create a new key under Control and name it Update;
    Inside Update, right-click, create a new DWORD (32-bit) value, rename it UpdateMode, and set its value to 0;
    Close the registry editor and restart the system.
   [UpdateMode controls whether windows refresh automatically: 1 means no, 0 means yes.]
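For those who prefer the command line, the same change can be made from an elevated prompt (key and value exactly as in the steps above); reboot afterwards:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Update" /v UpdateMode /t REG_DWORD /d 0 /f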

