Kubernetes: Creating Local PVs

Background

  • In discussions of Kubernetes persistent storage, the options mentioned most often are external Ceph or NFS services.
  • Sometimes the environment hosting a Kubernetes cluster does not provide such external storage services, and for performance reasons (low latency, high IOPS, etc.) network-backed storage may not meet the requirements well.
  • Beyond that, the only ways to keep data on a node's local disk used to be hostPath or emptyDir, neither of which is a sound approach.
  • Demand for local persistent storage in the Kubernetes community was high, so the Local Persistent Volume (local PV) was introduced in v1.10 and reached GA in v1.14.
  • Local PVs are not suitable for every application, because a local PV is bound to a specific node. When a Pod uses a local PV, the scheduler takes the distribution of local PVs into account and places the Pod on a node that has a matching local PV. If the node hosting the local PV goes down, data may be lost, so the application itself must be able to tolerate a node going offline and its data becoming inaccessible.

Notes

  • This post only records the operation and deployment steps.
  • Operating system: CentOS-7.6.1810 x86_64.
  • VM configuration: 4 CPUs, 8 GB RAM, 30 GB system disk, 20 GB data disk A, 5 GB data disk B.
  • Kubernetes cluster version: v1.14.4.
  • Two ways of managing local PVs are demonstrated here:
    • managing local PVs by hand
    • using the community-provided local-volume-provisioner to simplify local PV management

Preparation

Refer to the community documentation.

Inspect the disks

  • 20 GB local disk /dev/sdb
  • 5 GB local disk /dev/sdc
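
Both data disks should show up as unmounted block devices; lsblk is one way to confirm this:

lsblk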
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda      8:0    0  30G  0 disk
├─sda1   8:1    0   1G  0 part /boot
└─sda2   8:2    0  29G  0 part /
sdb      8:16   0  20G  0 disk
sdc      8:32   0   5G  0 disk

Create the discovery directory

This is the directory in which the provisioner discovers local PVs.

mkdir -p /mnt/disks

Format the disks

Note that each local PV corresponds to one whole disk here.

mkfs.ext4 /dev/sdb
mkfs.ext4 /dev/sdc

Get the disk UUIDs

SDB_UUID=$(blkid -s UUID -o value /dev/sdb)
SDC_UUID=$(blkid -s UUID -o value /dev/sdc)

Create the mount directories

The disk UUIDs are used as the mount directory names.

mkdir -p /mnt/disks/$SDB_UUID
mkdir -p /mnt/disks/$SDC_UUID

Mount the disks

mount -t ext4 /dev/sdb /mnt/disks/$SDB_UUID
mount -t ext4 /dev/sdc /mnt/disks/$SDC_UUID

Example output

NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda      8:0    0  30G  0 disk
├─sda1   8:1    0   1G  0 part /boot
└─sda2   8:2    0  29G  0 part /
sdb      8:16   0  20G  0 disk /mnt/disks/f8727d20-3ef9-4f83-b865-25943bc342a6
sdc      8:32   0   5G  0 disk /mnt/disks/9e0ead5f-ca8e-4018-98ad-960979d9cb26

Write the entries to fstab

echo "UUID=${SDB_UUID} /mnt/disks/${SDB_UUID} ext4 defaults 0 2" | tee -a /etc/fstab
echo "UUID=${SDC_UUID} /mnt/disks/${SDC_UUID} ext4 defaults 0 2" | tee -a /etc/fstab

Example output

#
# /etc/fstab
# Created by anaconda on Tue Jun 18 05:24:00 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=f3ca2ebf-a9fe-4cae-a20f-02ff93f2ba0c / xfs defaults 0 0
UUID=3253298b-04d2-47ca-a383-e30b0b1e2267 /boot xfs defaults 0 0
UUID=f8727d20-3ef9-4f83-b865-25943bc342a6 /mnt/disks/f8727d20-3ef9-4f83-b865-25943bc342a6 ext4 defaults 0 2
UUID=9e0ead5f-ca8e-4018-98ad-960979d9cb26 /mnt/disks/9e0ead5f-ca8e-4018-98ad-960979d9cb26 ext4 defaults 0 2
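
As an optional sanity check, unmount both disks and remount everything listed in fstab; a typo in the new entries would make mount -a fail:

umount /mnt/disks/$SDB_UUID /mnt/disks/$SDC_UUID
mount -a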

Label the node

kubectl label node k8s-node1 local-pv=present

Check the node label

kubectl get node -l local-pv=present

Example output

NAME        STATUS   ROLES    AGE    VERSION
k8s-node1   Ready    <none>   102m   v1.14.4

Managing local PVs by hand

Create the PVs

apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-node1-localpv-20g-1
spec:
  capacity:
    storage: 20Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/f8727d20-3ef9-4f83-b865-25943bc342a6
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-node1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-node1-localpv-5g-1
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/9e0ead5f-ca8e-4018-98ad-960979d9cb26
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-node1

Create the StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
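
If you follow the manual route, the two manifests above still need to be applied. Assuming they were saved as local-sc.yaml and local-pv.yaml (illustrative file names), that would look like the commands below. Note that volumeBindingMode: WaitForFirstConsumer delays binding a PVC until a Pod using it is scheduled, so the scheduler can honor the PV's node affinity.

kubectl apply -f local-sc.yaml
kubectl apply -f local-pv.yaml
kubectl get pv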

Managing local PVs with a provisioner

Install Helm

Binary installation

wget -O - https://get.helm.sh/helm-v2.14.1-linux-amd64.tar.gz | tar xz linux-amd64/helm
mv linux-amd64/helm /usr/local/bin/helm
rm -rf linux-amd64

Create RBAC

cat << EOF | kubectl apply -f -
# Create a ServiceAccount named tiller
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
# Bind the cluster-admin ClusterRole to tiller
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller-cluster-rule
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
EOF

Install the Helm server (Tiller)

helm init --tiller-image gcr.azk8s.cn/google_containers/tiller:v2.14.1 \
--service-account tiller \
--stable-repo-url http://mirror.azure.cn/kubernetes/charts/

Check the deployment

Check the Pod status

kubectl -n kube-system get pod -l app=helm,name=tiller

Example output

NAME                             READY     STATUS    RESTARTS   AGE
tiller-deploy-84fc6cd5f9-nz4m7   1/1       Running   0          1m

Check the Helm version

helm version

Example output

Client: &version.Version{SemVer:"v2.14.1", GitCommit:"d325d2a9c179b33af1a024cdb5a4472b6288016a", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.1", GitCommit:"d325d2a9c179b33af1a024cdb5a4472b6288016a", GitTreeState:"clean"}

Deploy local-volume-provisioner

Download the project source

git clone --depth=1 https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner.git

Modify the configuration

vim sig-storage-local-static-provisioner/helm/provisioner/value.yaml

Modify it as follows

#
# Common options.
#
common:
  rbac: true
  namespace: kube-system
  createNamespace: true
  useAlphaAPI: false
  setPVOwnerRef: true
  useJobForCleaning: false
  useNodeNameOnly: false
  minResyncPeriod: 5m0s
  configMapName: "local-provisioner-config"
  podSecurityPolicy: false
#
# Configure storage classes.
#
classes:
- name: local-storage
  hostDir: /mnt/disks
  mountDir: /mnt/disks
  volumeMode: Filesystem
  fsType: ext4
  blockCleanerCommand:
  - "/scripts/fsclean.sh"
  storageClass:
    reclaimPolicy: Retain
    isDefaultClass: true

#
# Configure DaemonSet for provisioner.
#
daemonset:
  name: "local-volume-provisioner"
  image: quay.io/external_storage/local-volume-provisioner:v2.3.2
  imagePullPolicy: IfNotPresent
  priorityClassName: system-node-critical
  serviceAccount: local-storage-admin
  nodeSelector:
    local-pv: present
  tolerations: []
  resources: {}
#
# Configure Prometheus monitoring
#
prometheus:
  operator:
    enabled: false

Generate the YAML manifest

helm template sig-storage-local-static-provisioner/helm/provisioner \
-f sig-storage-local-static-provisioner/helm/provisioner/value.yaml \
> local-volume-provisioner.generated.yaml

Apply the YAML manifest

kubectl apply -f local-volume-provisioner.generated.yaml

Check the provisioner

kubectl -n kube-system get pod -l app=local-volume-provisioner

Example output

NAME                             READY   STATUS    RESTARTS   AGE
local-volume-provisioner-cj242   1/1     Running   0          40s
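
The provisioner's logs should show the mount points it discovered under /mnt/disks and the PVs it created for them, which is a quick way to confirm discovery is working (same label selector as above):

kubectl -n kube-system logs -l app=local-volume-provisioner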

Verify the local PVs

List the PersistentVolumes

kubectl get pv

Example output

NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
local-pv-40a9a025   19Gi       RWO            Retain           Available           local-storage            32s
local-pv-6e9321fd   4911Mi     RWO            Retain           Available           local-storage            32s
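
To see where a discovered PV lives, kubectl describe shows the local path and the node affinity the provisioner filled in (the PV name is taken from the output above):

kubectl describe pv local-pv-40a9a025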

List the StorageClasses

kubectl get sc

Example output

NAME                      PROVISIONER                    AGE
local-storage (default)   kubernetes.io/no-provisioner   8m44s

Deploy a StatefulSet

vim localpv-sts.yaml

The contents are as follows

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: local-test
spec:
  serviceName: "local-service"
  replicas: 1
  selector:
    matchLabels:
      app: local-test
  template:
    metadata:
      labels:
        app: local-test
    spec:
      containers:
      - name: test-container
        image: k8s.gcr.io/busybox
        command:
        - "/bin/sh"
        args:
        - "-c"
        - "count=0; count_file=\"/usr/test-pod/count\"; test_file=\"/usr/test-pod/test_file\"; if [ -e $count_file ]; then count=$(cat $count_file); fi; echo $((count+1)) > $count_file; while [ 1 ]; do date >> $test_file; echo \"This is $MY_POD_NAME, count=$(cat $count_file)\" >> $test_file; sleep 10; done"
        volumeMounts:
        - name: local-vol
          mountPath: /usr/test-pod
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      securityContext:
        fsGroup: 1234
  volumeClaimTemplates:
  - metadata:
      name: local-vol
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-storage"
      resources:
        requests:
          storage: 1Gi
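
Apply the manifest so that the StatefulSet, and with it the PVC from its volumeClaimTemplates, is created:

kubectl apply -f localpv-sts.yaml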

Check the PV

kubectl get pv

Example output

NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                            STORAGECLASS    REASON   AGE
local-pv-6e9321fd   4911Mi     RWO            Retain           Bound       default/local-vol-local-test-0   local-storage            14m

Check the PVC

kubectl get pvc

Example output

NAME                     STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS    AGE
local-vol-local-test-0   Bound    local-pv-6e9321fd   4911Mi     RWO            local-storage   2m27s
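
Since the test container keeps appending timestamps to /usr/test-pod/test_file on the local PV, reading that file from inside the Pod confirms the volume is mounted and writable; local-test-0 is the first (and only) replica of the StatefulSet above:

kubectl exec local-test-0 -- cat /usr/test-pod/test_file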

Clean up

Delete the StatefulSet

kubectl delete -f localpv-sts.yaml

Delete the PVC

kubectl delete pvc local-vol-local-test-0

Delete the PV

kubectl delete pv local-pv-6e9321fd

Clean up the local directory

ssh root@k8s-node1 "rm -rf /mnt/disks/9e0ead5f-ca8e-4018-98ad-960979d9cb26/*"
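
Because the mount point itself is still present under /mnt/disks, the provisioner should rediscover it and create a fresh PV for it shortly afterwards; this can be checked with:

kubectl get pv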