Installing and Deploying Kubernetes 1.14 on CentOS 7 with kubeadm

2023-06-20

Background

As of this writing, Kubernetes has released version 1.14. This post records the installation and deployment steps, along with the troubleshooting done along the way.

There are generally two ways to deploy k8s: kubeadm (officially GA and considered ready for production use) and binary installation (rather tedious).

Here the kubeadm approach is used for a test deployment.

Test Environment

System Hostname IP
CentOS 7.6 k8s-master 138.138.82.14
CentOS 7.6 k8s-node1 138.138.82.15
CentOS 7.6 k8s-node2 138.138.82.16

Network plugin: Calico

Steps

1. Environment presets (run on all hosts)

Disable firewalld:

systemctl stop firewalld && systemctl disable firewalld

Disable SELinux:

setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

Disable swap:

swapoff -a && sed -i "s/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g" /etc/fstab
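You can verify that swap is now fully off; the Swap line should show all zeros:

free -h | grep -i swap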

Use the Alibaba Cloud yum mirror:

wget -O /etc/yum.repos.d/CentOS7-Aliyun.repo http://mirrors.aliyun.com/repo/Centos-7.repo

Update /etc/hosts: on every host, add the IP and hostname of all k8s nodes to this file; otherwise warnings or even errors will appear during initialization.
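For the test environment above, the entries would look like the following (a sketch; adjust the hostnames and IPs to your own environment):

cat >> /etc/hosts <<EOF
138.138.82.14 k8s-master
138.138.82.15 k8s-node1
138.138.82.16 k8s-node2
EOF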

2. Install the Docker engine (run on all hosts)

Add the Alibaba Cloud Docker repo (yum only reads files ending in .repo):

wget -O /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Install Docker:

yum install docker-ce -y

Start Docker:

systemctl enable docker && systemctl start docker

Adjust a couple of Docker settings: the registry mirror speeds up image pulls via Alibaba Cloud, and the cgroup driver is switched from the default cgroupfs to systemd, as officially recommended by k8s (otherwise kubeadm init emits a warning). Note that JSON does not allow comments, so keep the file comment-free:

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://5twf62k1.mirror.aliyuncs.com"],   // 改为阿里镜像
"exec-opts": ["native.cgroupdriver=systemd"]  // 默认cgroupfs,k8s官方推荐systemd,否则初始化出现Warning
}
EOF
systemctl daemon-reload
systemctl restart docker

Check and confirm Docker's Cgroup Driver:

[root@k8s-master ~]# docker info |grep Cgroup
Cgroup Driver: systemd

3. Install the Kubernetes bootstrap tools (run on all hosts)

Use the Alibaba Cloud Kubernetes repo:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install the tools (at the time of writing, the latest version was 1.14.1):

yum install -y kubelet kubeadm kubectl

Enable and start kubelet (it is normal for it to fail to start at this point; it will come up successfully during cluster initialization):

systemctl enable kubelet && systemctl start kubelet
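Since the repo installs the latest version by default, you can also list what is available and pin an exact version (a sketch; the exact version strings depend on what the mirror currently carries):

yum list kubelet kubeadm kubectl --showduplicates | tail -5
yum install -y kubelet-1.14.1 kubeadm-1.14.1 kubectl-1.14.1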

4. Pre-download the required images (run on the master node)

List the images, with their required versions, needed for cluster initialization:

[root@k8s-master ~]# kubeadm config images list
……
k8s.gcr.io/kube-apiserver:v1.14.1
k8s.gcr.io/kube-controller-manager:v1.14.1
k8s.gcr.io/kube-scheduler:v1.14.1
k8s.gcr.io/kube-proxy:v1.14.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

Because these essential images are hosted on k8s.gcr.io, which is blocked, they have to be downloaded separately before the cluster can be initialized.

The download script:

#!/bin/bash

set -e

KUBE_VERSION=v1.14.1
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

# Pull each image from the Aliyun mirror, retag it as k8s.gcr.io, then drop the mirror tag
for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi $ALIYUN_URL/$imageName
done
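Once the script finishes, confirm that all of the required images are present locally:

docker images | grep k8s.gcr.io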

5. Initialize the cluster (run on the master node)

kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=192.168.0.0/16

Note: a network plugin will be installed after initialization. Calico is the choice here, so --pod-network-cidr is set to 192.168.0.0/16, which is Calico's default pod CIDR.

Sample output from the initialization:

[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 138.138.82.14]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [138.138.82.14 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [138.138.82.14 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.002739 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 57iu95.6narx7y8peauts76
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 138.138.82.14:6443 --token 57iu95.6narx7y8peauts76 \
--discovery-token-ca-cert-hash sha256:5dc8beaa3b0e6fa26b97e2cc3b8ae776d000277fd23a7f8692dc613c6e59f5e4

The output above shows a successful initialization and lists the necessary follow-up steps as well as the command for joining nodes to the cluster; just follow them.

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
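With the kubeconfig in place, kubectl can reach the cluster. A quick sanity check (kubectl get cs still works in 1.14):

kubectl get cs   # controller-manager, scheduler and etcd-0 should all report Healthy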

Check the pods that are already running:

[root@k8s-master ~]# kubectl get pod -n kube-system -owide
NAME                                 READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
coredns-fb8b8dccf-6mgks              0/1     Pending   0          9m6s    <none>          <none>       <none>           <none>
coredns-fb8b8dccf-cbtlx              0/1     Pending   0          9m6s    <none>          <none>       <none>           <none>
etcd-k8s-master                      1/1     Running   0          8m22s   138.138.82.14   k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running   0          8m19s   138.138.82.14   k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   0          8m30s   138.138.82.14   k8s-master   <none>           <none>
kube-proxy-c9xd2                     1/1     Running   0          9m7s    138.138.82.14   k8s-master   <none>           <none>
kube-scheduler-k8s-master            1/1     Running   0          8m6s    138.138.82.14   k8s-master   <none>           <none>

At this point everything is running except coredns, which is not yet ready. This is expected, because no network plugin has been installed yet; once Calico is installed in the next step, the coredns pods will switch to Running as well.
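If you want to watch the coredns pods flip to Running as soon as the network plugin comes up, leave a watch running in another terminal:

kubectl get pod -n kube-system -w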

6. Install Calico (run on the master node)

Calico docs: https://docs.projectcalico.org/v3.6/getting-started/kubernetes/

kubectl apply -f \
https://docs.projectcalico.org/v3.5/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
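Rather than polling, you can also wait for the Calico daemonset to finish rolling out (assuming the manifest's default daemonset name, calico-node):

kubectl -n kube-system rollout status daemonset/calico-node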

A little while after applying the official yaml, all pods are in the normal Running state, and pod IPs have been assigned:

[root@k8s-master ~]# kubectl get pod -n kube-system -owide
NAME                                 READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
calico-node-r5mlj                    1/1     Running   0          72s   138.138.82.14   k8s-master   <none>           <none>
coredns-fb8b8dccf-6mgks              1/1     Running   0          15m   192.168.0.7     k8s-master   <none>           <none>
coredns-fb8b8dccf-cbtlx              1/1     Running   0          15m   192.168.0.6     k8s-master   <none>           <none>
etcd-k8s-master                      1/1     Running   0          15m   138.138.82.14   k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running   0          15m   138.138.82.14   k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   0          15m   138.138.82.14   k8s-master   <none>           <none>
kube-proxy-c9xd2                     1/1     Running   0          15m   138.138.82.14   k8s-master   <none>           <none>
kube-scheduler-k8s-master            1/1     Running   0          14m   138.138.82.14   k8s-master   <none>           <none>

Check node status:

[root@k8s-master ~]# kubectl get node -owide
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
k8s-master   Ready    master   22m   v1.14.1   138.138.82.14   <none>        CentOS Linux 7 (Core)   3.10.0-957.10.1.el7.x86_64   docker://18.9.5

At this point, cluster initialization is done and the master node is ready; next, join the remaining worker nodes to the cluster.

7. Join the cluster (run on the non-master nodes)

First, download the required images on each node that will join the cluster, using the script below:

#!/bin/bash

set -e

KUBE_VERSION=v1.14.1
KUBE_PAUSE_VERSION=3.1

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION})

# Worker nodes only need kube-proxy and pause; pull from the Aliyun mirror and retag
for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi $ALIYUN_URL/$imageName
done

Then copy the join command from the master's initialization output and run it on each worker node:

[root@k8s-node1 ~]# kubeadm join 138.138.82.14:6443 --token 57iu95.6narx7y8peauts76 \
> --discovery-token-ca-cert-hash sha256:5dc8beaa3b0e6fa26b97e2cc3b8ae776d000277fd23a7f8692dc613c6e59f5e4
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
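If the join command was lost, or the bootstrap token (valid for 24 hours by default) has expired, a fresh one can be generated on the master:

kubeadm token create --print-join-command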

8. Check node status from the master node

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   26m   v1.14.1
k8s-node1    Ready    <none>   84s   v1.14.1
k8s-node2    Ready    <none>   74s   v1.14.1

At this point, a minimal cluster has been fully deployed.
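As a quick smoke test that the cluster can actually schedule and network a workload (a sketch; the deployment name and image are arbitrary):

kubectl create deployment nginx --image=nginx
kubectl get pod -owide   # the pod should reach Running on a worker node with a 192.168.x.x pod IP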

Next: deploying additional add-ons.

Next post: calicoctl, the Calico command-line client.

The end.
