Deploying a containerd-based Kubernetes 1.20.x cluster with kubeadm

Notes

This only documents my own deployment process; it may not satisfy every requirement!

Unless otherwise noted, all commands are run as root!

Server information

Hostname     IP address      Role          OS                   containerd version       k8s version
k8s-master   172.16.80.100   master+node   Ubuntu 20.04.3 LTS   1.5.2-0ubuntu1~20.04.3   v1.20.11
k8s-node1    172.16.80.101   node          Ubuntu 20.04.3 LTS   1.5.2-0ubuntu1~20.04.3   v1.20.11
k8s-node2    172.16.80.102   node          Ubuntu 20.04.3 LTS   1.5.2-0ubuntu1~20.04.3   v1.20.11

Server initialization

Add sysctl settings

cat > /etc/sysctl.d/99-k8s.conf <<EOF
# Maximum number of file handles
fs.file-max=1048576
# Maximum number of open file descriptors
fs.nr_open=1048576
# Maximum number of concurrent async I/O requests
fs.aio-max-nr=1048576
# Only swap when memory is nearly exhausted
vm.swappiness=10
# On OOM, let the kernel invoke the OOM killer (by oom_score) instead of panicking
vm.panic_on_oom=0
# Allow memory overcommit
vm.overcommit_memory=1
# Maximum number of memory map areas a process may have, default 65536
vm.max_map_count=262144
# Make bridged (layer-2) traffic pass through iptables FORWARD rules
net.bridge.bridge-nf-call-arptables=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
# Disable strict reverse-path filtering, default 1
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
# Upper limit of the conntrack table
net.netfilter.nf_conntrack_max=1048576
# Timeout for TIME_WAIT entries in the conntrack table
net.netfilter.nf_conntrack_tcp_timeout_timewait=30
# Timeout for established TCP entries in the conntrack table
net.netfilter.nf_conntrack_tcp_timeout_established=1200
# Maximum length of a socket listen queue
net.core.somaxconn=21644
# BBR TCP congestion control, built into kernels since 4.9
#net.ipv4.tcp_congestion_control=bbr
#net.core.default_qdisc=fq
# Enable IPv4 packet forwarding
net.ipv4.ip_forward=1
# Allow binding to addresses not assigned to a local interface
net.ipv4.ip_nonlocal_bind=1
# TCP keepalive idle time, default 7200
net.ipv4.tcp_keepalive_time=600
# Interval between TCP keepalive probes
net.ipv4.tcp_keepalive_intvl=30
# Number of TCP keepalive probes before giving up
net.ipv4.tcp_keepalive_probes=10
EOF
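Files under /etc/sysctl.d/ are only read at boot, so apply them now with `sysctl --system`. The sketch below is my own addition: assuming the file above is in place, it compares each desired key against the live value under /proc/sys, skipping keys whose backing module (for example br_netfilter for the net.bridge.* keys) is not loaded yet.

```shell
# Apply now, then diff desired vs. running values; degrades to a no-op
# if the file is absent (e.g. when trying this sketch elsewhere).
conf=/etc/sysctl.d/99-k8s.conf
[ -r "$conf" ] || conf=/dev/null
command -v sysctl >/dev/null 2>&1 && sysctl --system >/dev/null 2>&1 || true
while IFS='=' read -r key want; do
  case "$key" in ''|\#*) continue ;; esac
  node="/proc/sys/${key//./\/}"          # net.ipv4.ip_forward -> net/ipv4/ip_forward
  if [ -r "$node" ]; then
    have=$(tr -s '\t' ' ' < "$node")
    [ "$have" = "$want" ] || echo "MISMATCH $key: want=$want have=$have"
  else
    echo "SKIP $key (not present; module not loaded yet?)"
  fi
done < "$conf"
```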

Adjust limits

cat > /etc/security/limits.d/99-k8s.conf <<EOF
* - nproc 1048576
* - nofile 1048576
root - nproc 1048576
root - nofile 1048576
EOF
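limits.d changes only affect new sessions. After logging in again (or via `sudo -i`), a quick check, my own addition:

```shell
# Per-process limits for the current session; after a fresh login on a
# configured host both should report 1048576.
nofile=$(ulimit -n)   # max open files
nproc=$(ulimit -u)    # max user processes
echo "nofile=$nofile nproc=$nproc"
```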

Tune journald settings

sed -e 's,^#Compress=yes,Compress=yes,' \
-e 's,^#SystemMaxUse=,SystemMaxUse=2G,' \
-e 's,^#Seal=yes,Seal=yes,' \
-e 's,^#RateLimitBurst=1000,RateLimitBurst=5000,' \
-i.bak /etc/systemd/journald.conf

Configure rsyslog

sed -r \
-e 's/^\$ModLoad imjournal/#&/' \
-e 's/^\$IMJournalStateFile/#&/' \
-i.bak /etc/rsyslog.conf

Enable logrotate compression

sed -e 's,^#compress,compress,' -i.bak /etc/logrotate.conf

Remove snapd

systemctl stop snapd
systemctl disable snapd
apt purge snapd

Refresh the APT cache

apt update

Upgrade system packages

apt upgrade

Install common tools

apt install -qyy \
apt-transport-https \
bash-completion \
ca-certificates \
conntrack \
curl \
dirmngr \
dstat \
git \
gnupg-agent \
gnupg2 \
htop \
iotop \
ipset \
ipvsadm \
jq \
linux-tools-common \
lrzsz \
netcat \
nethogs \
socat \
software-properties-common \
sudo \
sysstat \
tcpdump \
tree \
unzip \
vim

Configure networking

  • Ubuntu 20.04 LTS manages network configuration with netplan, which can use either NetworkManager or systemd-networkd as the backend that talks to the kernel.
  • Configuring interfaces the old way, through the ifupdown package and /etc/network/interfaces, no longer has any effect.
  • By default systemd-resolved takes over /etc/resolv.conf, so the file cannot be edited directly, and it listens on localhost:53, which is rather annoying. The fix is described below.

  • The NIC configuration file is /etc/netplan/00-installer-config.yaml; a sample is shown below.

network:
  ethernets:
    ens33:
      addresses:
      - 172.16.80.100/24
      dhcp4: no
      gateway4: 172.16.80.1
      nameservers:
        addresses:
        - 114.114.114.114
        - 8.8.8.8
        search: []
  renderer: networkd
  version: 2
  • Apply the configuration with netplan
netplan apply
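When working over SSH, a broken netplan config can lock you out. `netplan try` is the safer variant: it applies the configuration and rolls it back automatically unless you confirm within the timeout (120 seconds by default). This sketch is my own addition, gated behind a variable so it is a no-op unless you opt in on the host:

```shell
# Opt in with NETPLAN_TRY=yes on the host; other environments just print a hint.
if [ "${NETPLAN_TRY:-no}" = yes ] && command -v netplan >/dev/null 2>&1; then
  netplan try    # reverts automatically if not confirmed in time
else
  echo "skipping: set NETPLAN_TRY=yes on the host to run 'netplan try'"
fi
```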

At this point you will find that the nameserver in /etc/resolv.conf points to 127.0.0.53; the file is actually a symlink to /run/systemd/resolve/stub-resolv.conf, with the following content:

nameserver 127.0.0.53
options edns0
  • Edit the systemd-resolved configuration file /etc/systemd/resolved.conf
[Resolve]
#DNS=
#FallbackDNS=
#Domains=
LLMNR=no
MulticastDNS=no
DNSSEC=no
Cache=yes
DNSStubListener=no
  • Restart the systemd-resolved service
systemctl restart systemd-resolved.service
  • Repoint the /etc/resolv.conf symlink
ln -svf /run/systemd/resolve/resolv.conf /etc/resolv.conf
  • /etc/resolv.conf now looks much better:
nameserver 114.114.114.114
nameserver 8.8.8.8

Set the timezone

timedatectl set-timezone Asia/Shanghai

Configure time synchronization

  • Ubuntu 20.04 LTS uses systemd-timesyncd, a daemon that synchronizes the system clock over the network. Compared with a full NTP implementation it is much simpler: it only queries remote servers and applies the result to the local clock.
  • The daemon runs with minimal privileges and hooks into systemd-networkd, so it only operates while a network connection is available.
  • The configuration file is /etc/systemd/timesyncd.conf
sed -e 's,^#NTP=.*,NTP=cn.pool.ntp.org,' -i /etc/systemd/timesyncd.conf
  • Restart the systemd-timesyncd service
systemctl restart systemd-timesyncd.service

Set the default LANG

localectl set-locale LANG=en_US.UTF-8
localectl set-keymap us
localectl set-x11-keymap us

Configure /etc/hosts

127.0.0.1     localhost
::1           localhost
172.16.80.100 k8s-master
172.16.80.101 k8s-node1
172.16.80.102 k8s-node2

Disable the terminal welcome-message ads

  • Turn off fetching Ubuntu news
sed -e 's,^ENABLED=1,ENABLED=0,g' -i /etc/default/motd-news
  • Disable the unwanted parts of the dynamic MOTD
chmod -x /etc/update-motd.d/80-livepatch
chmod -x /etc/update-motd.d/10-help-text

Disable swap

swapoff -a
sed -r -e '/^[^#]*swap/s@^@#@' -i.bak /etc/fstab
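The sed expression comments out every non-comment line that mentions swap. To convince yourself it touches only those lines, here is a dry run on a sample fstab of my own making (the device names are made up):

```shell
# Sample fstab: a root mount, an active swap line, an already-commented swap line
cat > /tmp/fstab.sample <<'EOF'
/dev/sda1 / ext4 defaults 0 1
/dev/sda2 none swap sw 0 0
# /dev/sdb1 none swap sw 0 0
EOF
sed -r -e '/^[^#]*swap/s@^@#@' -i.bak /tmp/fstab.sample
cat /tmp/fstab.sample
# The root mount is untouched; only the active swap line gained a leading '#'.
```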

Configure kernel modules

cat > /etc/modules-load.d/k8s.conf <<EOF
ip_vs
# IPVS scheduler: least-connection
ip_vs_lc
# IPVS scheduler: weighted least-connection
ip_vs_wlc
# IPVS scheduler: round-robin
ip_vs_rr
# IPVS scheduler: weighted round-robin
ip_vs_wrr
# IPVS scheduler: source hashing
ip_vs_sh
# Connection-state tracking
nf_conntrack
#nf_conntrack_ipv4
#nf_conntrack_ipv6
# Bridge netfilter (needed for the net.bridge.* sysctls above)
br_netfilter
EOF
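modules-load.d is only processed at boot. To load the same list right away, strip the comments and blank lines and feed each name to modprobe (as root). The filter is demonstrated on a sample copy here, my own addition, so it can be tried anywhere:

```shell
# Demo of the comment/blank-line filter on a sample copy of the file
cat > /tmp/k8s-modules.sample <<'EOF'
ip_vs
# IPVS scheduler: least-connection
ip_vs_lc
nf_conntrack
#nf_conntrack_ipv4
br_netfilter
EOF
awk '!/^[[:space:]]*#/ && NF' /tmp/k8s-modules.sample
# On the real host, load immediately with:
#   awk '!/^[[:space:]]*#/ && NF' /etc/modules-load.d/k8s.conf | xargs -r -n1 modprobe
```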

Kubernetes cluster preparation

Add the APT repository

Use the Alibaba Cloud Kubernetes mirror

curl -sSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
cat > /etc/apt/sources.list.d/kubernetes.list <<EOF
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

Install the Kubernetes packages

apt-get update
apt-get install -y \
kubelet=1.20.11-00 \
kubeadm=1.20.11-00 \
kubectl=1.20.11-00 \
containerd=1.5.2-0ubuntu1~20.04.3
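Since the versions are pinned, it is worth holding the packages so a later `apt upgrade` cannot move them off 1.20.11 unexpectedly (undo with `apt-mark unhold`). This step is my own addition, guarded so it degrades to a message where apt is unavailable:

```shell
# Hold the pinned packages (run as root on every cluster node)
if command -v apt-mark >/dev/null 2>&1; then
  apt-mark hold kubelet kubeadm kubectl containerd || echo "hold failed (packages not installed yet?)"
else
  echo "apt-mark not found; not a Debian/Ubuntu host"
fi
held=attempted
```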

Enable shell completion

  • Temporary (current shell only)
source <(crictl completion)
source <(kubeadm completion bash)
source <(kubectl completion bash)
  • Permanent
crictl completion > /etc/bash_completion.d/crictl
kubectl completion bash > /etc/bash_completion.d/kubectl
kubeadm completion bash > /etc/bash_completion.d/kubeadm

Containerd

Once again: the containerd version used here is 1.5.2-0ubuntu1~20.04.3; versions earlier than 1.5.2 may not work!

Configure crictl

crictl config runtime-endpoint /run/containerd/containerd.sock
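The crictl command above simply writes /etc/crictl.yaml. An equivalent heredoc, handy in provisioning scripts before crictl is on PATH, is sketched below; it is written to /tmp here so the demo is side-effect-free (target /etc/crictl.yaml on a real node):

```shell
# Equivalent /etc/crictl.yaml, written to /tmp for the demo
cat > /tmp/crictl.yaml <<'EOF'
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
cat /tmp/crictl.yaml
```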

Generate the containerd configuration

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

Edit the containerd configuration

vim /etc/containerd/config.toml

Pause image address

In the [plugins."io.containerd.grpc.v1.cri"] section,

change sandbox_image = "k8s.gcr.io/pause:3.5" to sandbox_image = "registry.aliyuncs.com/k8sxio/pause:3.5"

cgroups

In the [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] section,

change SystemdCgroup = false to SystemdCgroup = true

docker.io mirror for mainland China

Append two lines under [plugins."io.containerd.grpc.v1.cri".registry.mirrors]:

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://pqbap4ya.mirror.aliyuncs.com"]

k8s.gcr.io mirror for mainland China

Append two more lines under [plugins."io.containerd.grpc.v1.cri".registry.mirrors], although this one does not seem to take effect:

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://pqbap4ya.mirror.aliyuncs.com"]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
    endpoint = ["https://registry.aliyuncs.com/k8sxio"]

Generate it directly with sed

mkdir -p /etc/containerd
containerd config default | \
sed -e 's,k8s.gcr.io,registry.aliyuncs.com/k8sxio,g' \
-e 's,SystemdCgroup = .*,SystemdCgroup = true,' \
-e 's/\[plugins."io.containerd.grpc.v1.cri".registry.mirrors\]/&\n \[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]\n endpoint = \["https:\/\/pqbap4ya.mirror.aliyuncs.com"]\n/' \
-e 's/\[plugins."io.containerd.grpc.v1.cri".registry.mirrors\]/&\n \[plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]\n endpoint = \["https:\/\/registry.aliyuncs.com\/k8sxio"]/' \
-e '/^\s*$/d' |
tee /etc/containerd/config.toml
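The trickiest part of that pipeline is the mirror-insertion expression. It can be sanity-checked in isolation on a one-line sample (my own addition) before trusting it with the real config:

```shell
# A sample header line, as emitted by `containerd config default`
cat > /tmp/mirrors.sample <<'EOF'
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
EOF
sed -e 's/\[plugins."io.containerd.grpc.v1.cri".registry.mirrors\]/&\n  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]\n    endpoint = ["https:\/\/pqbap4ya.mirror.aliyuncs.com"]/' \
  /tmp/mirrors.sample > /tmp/mirrors.out
cat /tmp/mirrors.out   # the docker.io mirror block now follows the header line
```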

Full configuration file

  • /etc/containerd/config.toml
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
version = 2

[cgroup]
path = ""

[debug]
address = ""
format = ""
gid = 0
level = ""
uid = 0

[grpc]
address = "/run/containerd/containerd.sock"
gid = 0
max_recv_message_size = 16777216
max_send_message_size = 16777216
tcp_address = ""
tcp_tls_cert = ""
tcp_tls_key = ""
uid = 0

[metrics]
address = ""
grpc_histogram = false

[plugins]

[plugins."io.containerd.gc.v1.scheduler"]
deletion_threshold = 0
mutation_threshold = 100
pause_threshold = 0.02
schedule_delay = "0s"
startup_delay = "100ms"

[plugins."io.containerd.grpc.v1.cri"]
disable_apparmor = false
disable_cgroup = false
disable_hugetlb_controller = true
disable_proc_mount = false
disable_tcp_service = true
enable_selinux = false
enable_tls_streaming = false
ignore_image_defined_volumes = false
max_concurrent_downloads = 3
max_container_log_line_size = 16384
netns_mounts_under_state_dir = false
restrict_oom_score_adj = false
sandbox_image = "registry.aliyuncs.com/k8sxio/pause:3.5"
selinux_category_range = 1024
stats_collect_period = 10
stream_idle_timeout = "4h0m0s"
stream_server_address = "127.0.0.1"
stream_server_port = "0"
systemd_cgroup = false
tolerate_missing_hugetlb_controller = true
unset_seccomp_profile = ""

[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/opt/cni/bin"
conf_dir = "/etc/cni/net.d"
conf_template = ""
max_conf_num = 1

[plugins."io.containerd.grpc.v1.cri".containerd]
default_runtime_name = "runc"
disable_snapshot_annotations = true
discard_unpacked_layers = false
no_pivot = false
snapshotter = "overlayfs"

[plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
base_runtime_spec = ""
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_root = ""
runtime_type = ""

[plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
base_runtime_spec = ""
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_root = ""
runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
BinaryName = ""
CriuImagePath = ""
CriuPath = ""
CriuWorkPath = ""
IoGid = 0
IoUid = 0
NoNewKeyring = false
NoPivotRoot = false
Root = ""
ShimCgroup = ""
SystemdCgroup = true

[plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
base_runtime_spec = ""
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_root = ""
runtime_type = ""

[plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

[plugins."io.containerd.grpc.v1.cri".image_decryption]
key_model = "node"

[plugins."io.containerd.grpc.v1.cri".registry]
config_path = ""

[plugins."io.containerd.grpc.v1.cri".registry.auths]

[plugins."io.containerd.grpc.v1.cri".registry.configs]

[plugins."io.containerd.grpc.v1.cri".registry.headers]

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
endpoint = ["https://pqbap4ya.mirror.aliyuncs.com"]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
endpoint = ["https://registry.aliyuncs.com/k8sxio"]

[plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
tls_cert_file = ""
tls_key_file = ""

[plugins."io.containerd.internal.v1.opt"]
path = "/opt/containerd"

[plugins."io.containerd.internal.v1.restart"]
interval = "10s"

[plugins."io.containerd.metadata.v1.bolt"]
content_sharing_policy = "shared"

[plugins."io.containerd.monitor.v1.cgroups"]
no_prometheus = false

[plugins."io.containerd.runtime.v1.linux"]
no_shim = false
runtime = "runc"
runtime_root = ""
shim = "containerd-shim"
shim_debug = false

[plugins."io.containerd.runtime.v2.task"]
platforms = ["linux/amd64"]

[plugins."io.containerd.service.v1.diff-service"]
default = ["walking"]

[plugins."io.containerd.snapshotter.v1.aufs"]
root_path = ""

[plugins."io.containerd.snapshotter.v1.btrfs"]
root_path = ""

[plugins."io.containerd.snapshotter.v1.devmapper"]
async_remove = false
base_image_size = ""
pool_name = ""
root_path = ""

[plugins."io.containerd.snapshotter.v1.native"]
root_path = ""

[plugins."io.containerd.snapshotter.v1.overlayfs"]
root_path = ""

[plugins."io.containerd.snapshotter.v1.zfs"]
root_path = ""

[proxy_plugins]

[stream_processors]

[stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar"

[stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
"io.containerd.timeout.shim.cleanup" = "5s"
"io.containerd.timeout.shim.load" = "5s"
"io.containerd.timeout.shim.shutdown" = "3s"
"io.containerd.timeout.task.state" = "2s"

[ttrpc]
address = ""
gid = 0
uid = 0

Restart containerd

systemctl enable containerd.service
systemctl restart containerd.service

Verify the configuration took effect

crictl info | jq '.config.containerd.runtimes.runc.options.SystemdCgroup'
crictl info | jq '.config.registry.mirrors."docker.io"'
crictl info | jq '.config.sandboxImage'

kubeadm initialization

Generate the kubeadm configuration file

kubeadm config print init-defaults > kubeadm-init.yaml

Edit the configuration file

After editing it looks like this:

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  # Point the CRI socket at containerd
  criSocket: /run/containerd/containerd.sock
---
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs:
  - localhost
  - kubernetes
  - kubernetes.default
  - kubernetes.default.svc
  - kubernetes.default.svc.cluster.local
  # Adjust the hostname and IP addresses for your own environment
  - k8s-master
  - 127.0.0.1
  - 10.96.0.1
  - 172.16.80.100
  extraArgs:
    authorization-mode: "Node,RBAC"
    runtime-config: "api/all=true"
  extraVolumes:
  - name: "timezone-volume"
    hostPath: "/usr/share/zoneinfo/Asia/Shanghai"
    mountPath: "/etc/localtime"
    readOnly: true
    pathType: File
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    bind-address: "0.0.0.0"
    cluster-signing-duration: "87600h"
  extraVolumes:
  - name: "timezone-volume"
    hostPath: "/usr/share/zoneinfo/Asia/Shanghai"
    mountPath: "/etc/localtime"
    readOnly: true
    pathType: File
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
    extraArgs:
      advertise-client-urls: "https://172.16.80.100:2379"
      listen-client-urls: "https://127.0.0.1:2379,https://172.16.80.100:2379"
      listen-peer-urls: "https://172.16.80.100:2380"
    # SANs for the etcd server certificate
    serverCertSANs:
    - k8s-master
    - localhost
    - ::1
    - 127.0.0.1
    - 172.16.80.100
    # SANs for the etcd peer certificate
    peerCertSANs:
    - k8s-master
    - localhost
    - ::1
    - 127.0.0.1
    - 172.16.80.100
# Replace k8s.gcr.io with registry.aliyuncs.com/k8sxio
imageRepository: registry.aliyuncs.com/k8sxio
kind: ClusterConfiguration
# Pin the Kubernetes version
kubernetesVersion: v1.20.11
networking:
  # Cluster domain, service subnet and pod subnet
  dnsDomain: cluster.local
  # kube-flannel defaults to 10.244.0.0/16
  # Calico defaults to 192.168.0.0/16
  #podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler:
  extraArgs:
    bind-address: "0.0.0.0"
  extraVolumes:
  - name: "timezone-volume"
    hostPath: "/usr/share/zoneinfo/Asia/Shanghai"
    mountPath: "/etc/localtime"
    readOnly: true
    pathType: File
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
healthzBindAddress: "0.0.0.0"
kind: KubeProxyConfiguration
metricsBindAddress: "0.0.0.0"
# Run kube-proxy in IPVS mode
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
# Must match containerd: use systemd as the cgroup driver
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
failSwapOn: false
healthzBindAddress: 0.0.0.0
healthzPort: 10248
kind: KubeletConfiguration
maxOpenFiles: 1048576
# Maximum number of pods per node, default 110
maxPods: 110
resolvConf: /etc/resolv.conf
rotateCertificates: true
staticPodPath: /etc/kubernetes/manifests

Initialize the cluster

  • Check for errors first with a dry run
kubeadm init --config=kubeadm-init.yaml --dry-run
  • Pull the images in advance
for img in $(kubeadm config images list --config=kubeadm-init.yaml); do
  crictl pull "$img"
done
  • Initialize the Kubernetes cluster
kubeadm init --config=kubeadm-init.yaml

Once initialization completes, it prints a kubeadm join command for joining worker nodes to the cluster:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.80.100:6443 --token wyoi59.me7ajasf4ilcmpgh \
--discovery-token-ca-cert-hash sha256:6b00aebd68220fec6ccce97c5a206cd331752cbeed7ca5701cfe8f883955fdb8
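The bootstrap token in that join command expires after 24 hours by default. If it has lapsed by the time a node joins, generate a fresh command on the master. This sketch is my own addition, guarded so it only runs where kubeadm exists:

```shell
# Print a fresh join command (run on the control-plane node)
if command -v kubeadm >/dev/null 2>&1; then
  kubeadm token create --print-join-command || echo "token creation failed (is this the control-plane node?)"
else
  echo "kubeadm not found; run this on the control-plane node"
fi
checked=yes
```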

Configure kubeconfig

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Make the master schedulable

kubectl taint node --all node-role.kubernetes.io/master-

Install a CNI plugin

Install only one of these; installing several at the same time will cause problems!

kube-flannel

To use kube-flannel, kubeadm-init.yaml must declare networking.podSubnet: 10.244.0.0/16.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Calico

Calico's default pod CIDR is 192.168.0.0/16, so kubeadm-init.yaml must declare networking.podSubnet: 192.168.0.0/16.

  • Install the operator
kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
  • Deploy the custom resources
kubectl apply -f https://docs.projectcalico.org/manifests/custom-resources.yaml

Cilium

  • Download the Cilium CLI
curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
tar xzfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz{,.sha256sum}
  • Install
cilium install
  • Verify
cilium status --wait
cilium connectivity test

Add worker nodes

  • Complete the server initialization steps
  • Install the Kubernetes packages and containerd
  • Configure containerd
  • Join the cluster by running the kubeadm join command printed when the cluster was initialized
kubeadm join 172.16.80.100:6443 --token wyoi59.me7ajasf4ilcmpgh \
--discovery-token-ca-cert-hash sha256:6b00aebd68220fec6ccce97c5a206cd331752cbeed7ca5701cfe8f883955fdb8

Check node status

kubectl get node -o wide

The node status looks like this:

NAME         STATUS   ROLES                  AGE    VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-master   Ready    control-plane,master   105m   v1.20.11   172.16.80.100   <none>        Ubuntu 20.04.3 LTS   5.4.0-88-generic   containerd://1.5.2
k8s-node1    Ready    <none>                 65m    v1.20.11   172.16.80.101   <none>        Ubuntu 20.04.3 LTS   5.4.0-88-generic   containerd://1.5.2
k8s-node2    Ready    <none>                 65m    v1.20.11   172.16.80.102   <none>        Ubuntu 20.04.3 LTS   5.4.0-88-generic   containerd://1.5.2