JohnShen's Blog.

Offline Installation of Kubernetes with kubeadm

2021/09/23

Offline Installation of K8s with kubeadm

K8s version: v1.22.2

Prerequisites

  • Each machine has at least 2 GB of RAM

  • Each machine has at least 2 CPU cores

  • Full network connectivity between all machines in the cluster (public or private network both work)

  • No duplicate hostnames, MAC addresses, or product_uuids among the nodes

    ```shell
    # MAC address
    ip link
    # product_uuid
    sudo cat /sys/class/dmi/id/product_uuid
    ```
  • Swap disabled (commands below)

  • Confirm the required ports are available

    • Control-plane node ports

      | Protocol | Direction | Port Range | Purpose                 | Used By                          |
      |----------|-----------|------------|-------------------------|----------------------------------|
      | TCP      | Inbound   | 6443       | Kubernetes API server   | All                              |
      | TCP      | Inbound   | 2379-2380  | etcd server client API  | kube-apiserver, etcd             |
      | TCP      | Inbound   | 10250      | Kubelet API             | kubelet itself, control plane    |
      | TCP      | Inbound   | 10259      | kube-scheduler          | kube-scheduler itself            |
      | TCP      | Inbound   | 10257      | kube-controller-manager | kube-controller-manager itself   |
    • Worker node ports

      | Protocol | Direction | Port Range  | Purpose           | Used By                       |
      |----------|-----------|-------------|-------------------|-------------------------------|
      | TCP      | Inbound   | 10250       | Kubelet API       | kubelet itself, control plane |
      | TCP      | Inbound   | 30000-32767 | NodePort Services | All                           |
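The port requirement above can be spot-checked locally before installing anything, using bash's built-in /dev/tcp so no extra tools are needed. `port_free` is a hypothetical helper: a connect that succeeds means something is already listening on that port.

```shell
#!/usr/bin/env bash
# Check that the core ports are not already in use on this node.
# A successful /dev/tcp connect means something is listening, so the
# port is NOT free; the `!` inverts that into "port is free".
port_free() {
  ! (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

# Add the scheduler/controller-manager ports for your version as needed.
for port in 6443 2379 2380 10250; do
  if port_free "$port"; then
    echo "port $port is free"
  else
    echo "port $port is already in use" >&2
  fi
done
```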

Environment Configuration

Disable the firewall

```shell
systemctl stop firewalld
systemctl disable firewalld
```

Disable SELinux

```shell
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
```

Disable swap

```shell
# Temporarily disable swap (reverts after reboot)
swapoff -a
# Permanently disable swap (comments out the swap line in fstab)
sed -ri 's/.*swap.*/#&/' /etc/fstab
```
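After the commands above, swap can be confirmed off by checking /proc/swaps, which lists only its header line when no swap is active. A small sketch; the optional file argument exists only so the parsing can be exercised against sample data:

```shell
#!/usr/bin/env bash
# Exit 0 when the given swaps file (default /proc/swaps) lists no active
# swap device or file, i.e. contains only the header line.
swap_off() {
  awk 'NR > 1 { n++ } END { exit n > 0 }' "${1:-/proc/swaps}"
}

if swap_off; then
  echo "swap is disabled"
else
  echo "swap is still active" >&2
fi
```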

Change the cgroup driver & install Docker

https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker

Edit /etc/docker/daemon.json (`vim /etc/docker/daemon.json`) and add:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

Then restart Docker:

```shell
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
```
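A malformed daemon.json will prevent Docker from starting at all, so it is worth validating the file before restarting the daemon. `write_daemon_json` is a hypothetical helper, and python3's `json.tool` module is assumed to be available:

```shell
#!/usr/bin/env bash
# Write the cgroup-driver setting to the given file, then validate the
# JSON before touching Docker; returns nonzero on malformed JSON.
write_daemon_json() {
  cat > "$1" << 'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
  python3 -m json.tool "$1" > /dev/null
}

# Usage on a real node (then restart Docker as shown above):
# write_daemon_json /etc/docker/daemon.json && echo "daemon.json is valid"
```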

Adjust the network configuration

Pass bridged IPv4 traffic to iptables chains:

```shell
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
```

Then enable IP forwarding in /etc/sysctl.conf and reload:

```shell
vim /etc/sysctl.conf   # add: net.ipv4.ip_forward = 1
sudo sysctl --system
```
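After `sysctl --system`, the values can be read back through /proc to confirm they took effect. `sysctl_is` is a hypothetical helper; note the net.bridge.* keys only exist once the br_netfilter kernel module is loaded:

```shell
#!/usr/bin/env bash
# Compare a sysctl key (dotted form) against an expected value by
# reading its /proc/sys path directly.
sysctl_is() {
  local path="/proc/sys/${1//.//}"
  [ -r "$path" ] && [ "$(cat "$path")" = "$2" ]
}

# On a configured node all three should report ok:
for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward; do
  if sysctl_is "$key" 1; then
    echo "$key = 1 (ok)"
  else
    echo "$key not set (is br_netfilter loaded?)" >&2
  fi
done
```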

Install socat and conntrack

Kubernetes ≥ 1.18 requires socat and conntrack.

```shell
yum install -y socat
yum install -y conntrack-tools
```

Install kubeadm

A. Install kubelet, kubeadm, and kubectl via the package manager

```shell
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
```

B. Manually install kubelet, kubeadm, and kubectl

1. Install CNI plugins

The official docs reference v0.8.2, but the yum packages resolve to v0.8.7.

```shell
mkdir -p /opt/cni/bin
wget https://github.com/containernetworking/plugins/releases/download/v0.8.7/cni-plugins-linux-amd64-v0.8.7.tgz
tar zxvf cni-plugins-linux-amd64-v0.8.7.tgz -C /opt/cni/bin
```
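Since this is an offline install and binaries are carried between machines, it is prudent to verify the tarball before unpacking. The containernetworking/plugins release page publishes checksum files alongside each tarball; the `.sha256` filename used here is an assumption based on GitHub release conventions:

```shell
#!/usr/bin/env bash
# Verify a downloaded tarball against its published .sha256 file,
# then unpack it into the destination directory.
verify_and_unpack() {
  local tgz="$1" dest="$2"
  sha256sum -c "${tgz}.sha256" || return 1
  mkdir -p "$dest"
  tar zxvf "$tgz" -C "$dest"
}

# verify_and_unpack cni-plugins-linux-amd64-v0.8.7.tgz /opt/cni/bin
```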

2. Install crictl

Required by kubeadm/kubelet for the Container Runtime Interface (CRI).

The official docs reference v1.17.0, but the yum packages resolve to v1.13.0.

```shell
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.13.0/crictl-v1.13.0-linux-amd64.tar.gz
tar zxvf crictl-v1.13.0-linux-amd64.tar.gz -C /usr/local/bin
```

3. Install kubeadm, kubelet, and kubectl, and add the kubelet systemd service:

Version v1.22.2

The binaries go into /usr/bin, so there is no need to rewrite the install path with sed as the official docs do.

```shell
curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/v1.22.2/bin/linux/amd64/{kubeadm,kubelet,kubectl}
chmod +x {kubeadm,kubelet,kubectl}
cp kube* /usr/bin

curl -sSL "https://raw.githubusercontent.com/kubernetes/release/v0.4.0/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | tee /etc/systemd/system/kubelet.service
mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/v0.4.0/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```

Enable and start kubelet

```shell
systemctl enable --now kubelet
```

Install with kubeadm

kubeadm init

```shell
kubeadm init --apiserver-advertise-address=10.219.184.190 --kubernetes-version v1.22.2 \
  --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
```

  • --apiserver-advertise-address: the master node's IP
  • --kubernetes-version: the K8s version to install
  • --pod-network-cidr: the IP range the pod network may use (this value comes from the flannel plugin)
  • --service-cidr: a separate IP range for service virtual IPs

init args:https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/

If initialization goes wrong, run kubeadm reset to revert.

Execution log

```
[root@ydqa5 kubelet.service.d]# kubeadm init --apiserver-advertise-address=10.219.184.190 --kubernetes-version v1.22.2   --pod-network-cidr=10.244.0.0/16  --service-cidr=10.96.0.0/12
[init] Using Kubernetes version: v1.22.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local ydqa5] and IPs [10.96.0.1 10.219.184.190]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost ydqa5] and IPs [10.219.184.190 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost ydqa5] and IPs [10.219.184.190 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.505159 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ydqa5 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node ydqa5 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: gghdcj.11vpqnv1oq8sbj52
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.219.184.190:6443 --token gghdcj.11vpqnv1oq8sbj52 \
    --discovery-token-ca-cert-hash sha256:316e5253cf337e177afa65bf352922f03cc510e134ed786c80da2baf8a55239e
```
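If the printed token or hash is lost, the `--discovery-token-ca-cert-hash` value can be recomputed from the cluster CA at any time; this is the openssl pipeline documented for kubeadm join (sha256 of the DER-encoded public key):

```shell
#!/usr/bin/env bash
# Recompute the discovery-token CA cert hash: extract the public key,
# re-encode it as DER, and take its sha256 digest.
ca_cert_hash() {
  openssl x509 -pubkey -in "${1:-/etc/kubernetes/pki/ca.crt}" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# On the master node:
# echo "sha256:$(ca_cert_hash)"
```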

Verify the result

After the master node is set up, run:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config   # used by kubectl
sudo chown $(id -u):$(id -g) $HOME/.kube/config            # give the current (non-root) user ownership
```

Check the nodes:

```shell
kubectl get nodes -o wide
```

Check the pods:

```
[root@ydqa5 kubelet.service.d]# kubectl get pods --all-namespaces
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-78fcd69978-6gvbv        0/1     Pending   0          15m
kube-system   coredns-78fcd69978-kvw8q        0/1     Pending   0          15m
kube-system   etcd-ydqa5                      1/1     Running   0          15m
kube-system   kube-apiserver-ydqa5            1/1     Running   0          15m
kube-system   kube-controller-manager-ydqa5   1/1     Running   0          15m
kube-system   kube-proxy-8vbtm                1/1     Running   0          15m
kube-system   kube-scheduler-ydqa5            1/1     Running   0          15m
```

coredns is in the Pending state because no network plugin has been installed yet.

Install a network plugin

```shell
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
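The flannel manifest hard-codes its pod network to 10.244.0.0/16, which is why the same value was passed to --pod-network-cidr earlier. A quick grep can confirm a downloaded copy of the manifest carries the expected CIDR before applying it (`cidr_matches` is a hypothetical helper; the quoting assumes the net-conf.json layout inside the manifest):

```shell
#!/usr/bin/env bash
# Confirm the flannel manifest's "Network" value matches the CIDR that
# was passed to kubeadm init via --pod-network-cidr.
cidr_matches() {
  grep -q "\"Network\": \"$2\"" "$1"
}

# curl -sSL -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# cidr_matches kube-flannel.yml 10.244.0.0/16 && kubectl apply -f kube-flannel.yml
```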
```
[root@ydqa5 john]# kubectl get pods --all-namespaces
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-78fcd69978-6gvbv        1/1     Running   0          24m
kube-system   coredns-78fcd69978-kvw8q        1/1     Running   0          24m
kube-system   etcd-ydqa5                      1/1     Running   0          24m
kube-system   kube-apiserver-ydqa5            1/1     Running   0          24m
kube-system   kube-controller-manager-ydqa5   1/1     Running   0          24m
kube-system   kube-flannel-ds-cxf9k           1/1     Running   0          3m12s
kube-system   kube-proxy-8vbtm                1/1     Running   0          24m
kube-system   kube-scheduler-ydqa5            1/1     Running   0          24m
```

Inspect the flannel pod:

```shell
kubectl describe pods kube-flannel-ds-cxf9k -n kube-system
```

kubeadm join

On the master node, generate the join command with:

```shell
kubeadm token create --print-join-command
```

After running it on each worker node, use kubectl get nodes -o wide and wait until the other two nodes are Ready.
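Rather than re-running kubectl get nodes by hand, a small poll loop can wait until every node reports Ready. A sketch; `all_ready` is a hypothetical helper that parses the default kubectl output columns:

```shell
#!/usr/bin/env bash
# Read `kubectl get nodes` output on stdin; exit 0 only when every node
# line (skipping the header) has STATUS == Ready.
all_ready() {
  awk 'NR > 1 && $2 != "Ready" { bad = 1 } END { exit bad }'
}

# Poll on the master until all joined nodes are Ready:
# until kubectl get nodes | all_ready; do sleep 5; done; echo "all nodes Ready"
```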

Check the pods:

```
[root@ydqa5 kubelet.service.d]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
kube-system   coredns-78fcd69978-6gvbv        1/1     Running   0          35m     10.244.0.3       ydqa5   <none>           <none>
kube-system   coredns-78fcd69978-kvw8q        1/1     Running   0          35m     10.244.0.2       ydqa5   <none>           <none>
kube-system   etcd-ydqa5                      1/1     Running   0          35m     10.219.184.190   ydqa5   <none>           <none>
kube-system   kube-apiserver-ydqa5            1/1     Running   0          35m     10.219.184.190   ydqa5   <none>           <none>
kube-system   kube-controller-manager-ydqa5   1/1     Running   0          35m     10.219.184.190   ydqa5   <none>           <none>
kube-system   kube-flannel-ds-599g4           1/1     Running   0          2m57s   10.219.188.160   ydqa8   <none>           <none>
kube-system   kube-flannel-ds-cxf9k           1/1     Running   0          14m     10.219.184.190   ydqa5   <none>           <none>
kube-system   kube-flannel-ds-jv82z           1/1     Running   0          3m      10.219.184.13    ydqa7   <none>           <none>
kube-system   kube-proxy-8vbtm                1/1     Running   0          35m     10.219.184.190   ydqa5   <none>           <none>
kube-system   kube-proxy-9c858                1/1     Running   0          2m57s   10.219.188.160   ydqa8   <none>           <none>
kube-system   kube-proxy-gc4rb                1/1     Running   0          3m      10.219.184.13    ydqa7   <none>           <none>
kube-system   kube-scheduler-ydqa5            1/1     Running   0          35m     10.219.184.190   ydqa5   <none>           <none>
```