
Deploying a single control-plane Kubernetes cluster with kubeadm

Introduction to kubeadm

Kubeadm is a tool built to provide kubeadm init and kubeadm join as best-practice “fast paths” for creating Kubernetes clusters.

  • kubeadm init creates the master (control-plane) node
  • kubeadm join joins a worker node to the existing cluster

Before you begin

Before you begin, the machines used for the Kubernetes cluster must meet the following requirements; a quick way to verify them follows the list:

  • One or more machines running a deb/rpm-compatible OS, for example Ubuntu or CentOS
  • 2 GB or more of RAM per machine. Any less leaves little room for your apps.
  • 2 CPUs or more on the control-plane node
  • Full network connectivity among all machines in the cluster. A public or private network is fine.
  • Swap off
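
A minimal sketch for checking these prerequisites on each CentOS 7 machine (the commands are standard, but treat the expected values as guidance, not part of the official checklist):

nproc       # expect 2 or more on the control-plane node
free -h     # expect 2 GB of RAM or more
swapon -s   # expect empty output once swap is disabled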

The actual machines are as follows:

Hostname      IP             Role           OS
k8snode01     10.8.141.79    K8s node 1     CentOS 7.6
k8snode02     10.8.177.123   K8s node 2     CentOS 7.6
k8snode03     10.8.177.192   K8s node 3     CentOS 7.6
k8smaster01   10.8.112.141   K8s master 1   CentOS 7.6

Preparing the environment

1 Turn off swap:
# swapoff -a       # temporary
# vim /etc/fstab   # permanent: comment out the swap line
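# A non-interactive alternative for the permanent step (an assumption, not in
# the original; verify /etc/fstab afterwards): comment out every swap entry.
sed -ri '/[[:space:]]swap[[:space:]]/s/^/#/' /etc/fstab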

2 Configure host name resolution
# Add the cluster host names to /etc/hosts.
cat >> /etc/hosts << EOF
10.8.112.141 k8smaster01
10.8.141.79 k8snode01
10.8.177.123 k8snode02
10.8.177.192 k8snode03
EOF

3 Set up mutual SSH trust
ssh-keygen -f ~/.ssh/id_rsa -N ''
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8smaster01
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8snode01
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8snode02
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8snode03
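
# Optional sanity check (a sketch, assuming the four hosts above): confirm
# passwordless SSH works from this machine to every node.
for h in k8smaster01 k8snode01 k8snode02 k8snode03; do
    ssh -o BatchMode=yes root@$h hostname
done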

Install Docker / kubeadm / kubelet on all nodes

Installing Docker

Refer to Container runtimes.

#!/bin/bash

# Install Docker CE
## Set up the repository
### Install required packages.
yum install -y yum-utils device-mapper-persistent-data lvm2

### Add Docker repository.
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

## Install Docker CE.
yum update -y && yum install -y docker-ce-18.06.2.ce

## Create /etc/docker directory.
mkdir -p /etc/docker

# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl restart docker
systemctl enable docker.service
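
After the restart, it is worth confirming that Docker picked up the systemd cgroup driver configured in daemon.json (a minimal check, not part of the original script):

docker info | grep -i 'cgroup driver'
# Should print: Cgroup Driver: systemd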

Installing kubeadm

Refer to Installing kubeadm.

Here we install a pinned version, 1.16.3, of kubeadm / kubelet / kubectl:

#!/bin/bash

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubeadm-1.16.3-0.x86_64 kubelet-1.16.3-0.x86_64 kubectl-1.16.3-0.x86_64 --disableexcludes=kubernetes

systemctl enable --now kubelet

Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g.

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
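
These bridge sysctls only exist when the br_netfilter kernel module is loaded; the official kubeadm installation guide has you load it before applying them (a small addition to the original):

modprobe br_netfilter
lsmod | grep br_netfilter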

Note: all of the steps above are required on every master and worker node.

Initializing your control-plane node

Refer to Creating a single control-plane cluster with kubeadm.

#!/bin/bash

kubeadm init \
    --kubernetes-version v1.16.3 \
    --pod-network-cidr=10.244.0.0/16 \
    --service-cidr=10.1.0.0/16 \
    --apiserver-advertise-address=0.0.0.0

For the meaning of each flag, refer to kubeadm init:

--kubernetes-version            The version of the Kubernetes components to deploy; it should match the kubelet version.
--pod-network-cidr              The IP address range for the pod network, in CIDR format; with the flannel network plugin the expected value is 10.244.0.0/16.
--service-cidr                  Default: "10.96.0.0/12". Use an alternative range of IP addresses for service VIPs.
--apiserver-advertise-address   The IP address the API server will advertise it is listening on. If not set, the default network interface is used.

kubeadm init works through a series of setup phases; the final output is as follows:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.8.112.141:6443 --token c3gynf.fvcwcwlla88jfyvc \
    --discovery-token-ca-cert-hash sha256:c23a11863be652316092e4cda59d0c56267506b4636da0e747818fd864c133f6

Run the steps above on the master:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
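
With the kubeconfig in place, kubectl can talk to the cluster. Note that the master node usually reports NotReady until a pod network is deployed (a minimal check, not in the original):

kubectl get nodes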

Note: this step is only needed on the master.

Deploying the network plugin

The flannel network plugin is used here. The kube-flannel.yml file can be obtained from https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml. Then run kubectl apply to deploy the network plugin.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
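
For flannel, this amounts to a single command (assuming the raw URL of the manifest linked above):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml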

Once a pod network has been installed, you can confirm that it is working by checking that the CoreDNS pod is Running in the output of kubectl get pods --all-namespaces.
Once the CoreDNS pod is up and running, you can continue by joining your nodes.
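
For example (a minimal check):

kubectl get pods --all-namespaces | grep coredns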

The result after a successful deployment:

Adding nodes to the cluster

Refer to kubeadm join.

Note: run these steps on the worker nodes.

The output of kubeadm init includes the parameters needed by kubeadm join, namely the token and the discovery-token-ca-cert-hash.
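
For example, reusing the token and hash from the init output above:

kubeadm join 10.8.112.141:6443 --token c3gynf.fvcwcwlla88jfyvc \
    --discovery-token-ca-cert-hash sha256:c23a11863be652316092e4cda59d0c56267506b4636da0e747818fd864c133f6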

The output after join:

If you have forgotten the token, it can be retrieved with "kubeadm token list":

[root@10-8-112-141 ~]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                 EXTRA GROUPS
c3gynf.fvcwcwlla88jfyvc   2h    2019-12-11T13:50:15+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

The hash of the CA public key can be obtained with the following command:

[root@10-8-112-141 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
c23a11863be652316092e4cda59d0c56267506b4636da0e747818fd864c133f6

The token expires after 24 hours by default. A new one can be generated with:

kubeadm token create --print-join-command

Removing a node from the cluster (tear down)

To undo what kubeadm did, you should first drain the node and make sure that the node is empty before shutting it down.

Talking to the control-plane node with the appropriate credentials, run:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

Then, on the node being removed, reset all kubeadm installed state:

kubeadm reset

The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If you want to reset the IPVS tables, you must run the following command:

ipvsadm -C

Getting cluster status

[root@10-8-112-141 ~]# kubectl cluster-info
Kubernetes master is running at https://10.8.112.141:6443
KubeDNS is running at https://10.8.112.141:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

kubectl get cs shows AGE as <unknown>, for reasons that are unclear; a workaround is to use kubectl get cs -o yaml instead.

[root@10-8-112-141 ~]# kubectl get cs
NAME                 AGE
controller-manager   <unknown>
scheduler            <unknown>
etcd-0               <unknown>
[root@10-8-112-141 ~]# kubectl get cs -o yaml
apiVersion: v1
items:
- apiVersion: v1
  conditions:
  - message: ok
    status: "True"
    type: Healthy
  kind: ComponentStatus
  metadata:
    creationTimestamp: null
    name: scheduler
    selfLink: /api/v1/componentstatuses/scheduler
- apiVersion: v1
  conditions:
  - message: ok
    status: "True"
    type: Healthy
  kind: ComponentStatus
  metadata:
    creationTimestamp: null
    name: controller-manager
    selfLink: /api/v1/componentstatuses/controller-manager
- apiVersion: v1
  conditions:
  - message: '{"health":"true"}'
    status: "True"
    type: Healthy
  kind: ComponentStatus
  metadata:
    creationTimestamp: null
    name: etcd-0
    selfLink: /api/v1/componentstatuses/etcd-0
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Testing the Kubernetes cluster

Create a pod in the Kubernetes cluster and verify that it runs correctly:

# kubectl create deployment nginx --image=nginx
# kubectl expose deployment nginx --port=80 --type=NodePort
[root@10-8-112-141 ~]# kubectl get pod,svc -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
pod/nginx-86c57db685-zbbnr   1/1     Running   0          12h   10.244.4.2   10-8-177-192   <none>           <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service/kubernetes   ClusterIP   10.1.0.1     <none>        443/TCP        22h   <none>
service/nginx        NodePort    10.1.41.49   <none>        80:31167/TCP   12h   app=nginx

Access it via http://ClusterIP:Port, e.g. curl 10.1.41.49:80,

or via http://PodIP:Port, e.g. curl 10.244.4.2:80,

or via http://NodeIP:NodePort, e.g. curl 10.8.177.192:31167:

[root@10-8-112-141 ~]# curl 10.1.41.49:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

You can also open a shell inside the deployed nginx pod:

kubectl exec -it nginx-86c57db685-zbbnr -- sh

Installing the Dashboard

Refer to Web UI (Dashboard).

Deploying the Dashboard

[root@10-8-112-141 k8s]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml

After deployment, run kubectl get pods --all-namespaces to check the pod status:

[root@10-8-112-141 k8s]# kubectl get pods --all-namespaces | grep dashboard
kubernetes-dashboard   dashboard-metrics-scraper-76585494d8-fnwp4   1/1   Running   0   16s
kubernetes-dashboard   kubernetes-dashboard-b65488c4-l9kgm          1/1   Running   0   16s
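
By default the dashboard is only reachable from inside the cluster. One way to reach it from the machine running kubectl, per the Web UI (Dashboard) reference above, is kubectl proxy:

kubectl proxy
# Then open:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/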

To be continued...

References

  1. Overview of kubeadm
  2. Creating a single control-plane cluster with kubeadm
  3. kubeadm init
  4. kubeadm-join
  5. Web UI (Dashboard)
  6. Deploying a k8s cluster with kubeadm
  7. Appendix 012: Deploying highly available Kubernetes with kubeadm