Kubernetes Persistent Storage with GlusterFS

2021-05-12 15:27

GlusterFS is an open-source distributed file system with strong scale-out capability. It can support petabytes of storage and thousands of clients, interconnecting storage nodes over the network into a single parallel network file system, and it offers scalability, high performance, and high availability.

Prerequisite: a GlusterFS cluster must already be deployed in the lab environment; this article uses a volume on it named gv0.
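
If the volume does not exist yet, it can be created on the GlusterFS nodes roughly as follows. This is only a sketch: the brick path /data/brick1/gv0 and the two-node replica layout are assumptions, not taken from the original environment.

# On one GlusterFS node (e.g. 10.0.0.41); the peer address and brick paths are assumed
$ gluster peer probe 10.0.0.42
$ gluster volume create gv0 replica 2 \
    10.0.0.41:/data/brick1/gv0 \
    10.0.0.42:/data/brick1/gv0
# A replica-2 volume prompts a split-brain warning; confirm it, or use an arbiter in production
$ gluster volume start gv0
$ gluster volume info gv0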

1. Create the Endpoints object, in a file named glusterfs_ep.yaml

$ vi glusterfs_ep.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs
  namespace: default
subsets:
# IP addresses of the GlusterFS cluster nodes
- addresses:
  - ip: 10.0.0.41
  - ip: 10.0.0.42
  ports:
  # GlusterFS port number
  - port: 49152
    protocol: TCP

Apply the YAML:

$ kubectl create -f  glusterfs_ep.yaml
endpoints/glusterfs created

# Check the created Endpoints
[root@k8s-master01 ~]# kubectl get ep
NAME                 ENDPOINTS                                    AGE
glusterfs            10.0.0.41:49152,10.0.0.42:49152       15s

2. Create a Service for the Endpoints
The Endpoints object only lists the GlusterFS cluster nodes. To let Pods reach them, create a Service with the same name and no selector; Kubernetes then pairs it with these manually created Endpoints instead of generating its own.

$ vi glusterfs_svc.yaml
apiVersion: v1
kind: Service
metadata:
  # the name must match the name of the Endpoints object
  name: glusterfs
spec:
  ports:
  - port: 49152
    protocol: TCP
    targetPort: 49152
  sessionAffinity: None
  type: ClusterIP

Apply the YAML:

$ kubectl create -f  glusterfs_svc.yaml
service/glusterfs created

$ kubectl get svc
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
glusterfs            ClusterIP   10.1.104.145   <none>        49152/TCP   20s
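
Because the Service defines no selector, it is associated with the manually created Endpoints object solely by the identical name glusterfs. A quick way to confirm the association (expected to show both bricks, matching the earlier kubectl get ep output):

$ kubectl describe svc glusterfs | grep -i "Endpoints"
# e.g. Endpoints: 10.0.0.41:49152,10.0.0.42:49152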

3. Create a PV for the GlusterFS volume

$ vi glusterfs_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster
  labels:
    type: glusterfs
spec:
  capacity:
    # capacity of this PV
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    # name of the Endpoints object created for GlusterFS
    endpoints: "glusterfs"
    # path is the volume created in GlusterFS; log in to the GlusterFS
    # cluster and run "gluster volume list" to see the existing volumes
    path: "gv0"
    readOnly: false

Apply the YAML:

$ kubectl create -f  glusterfs_pv.yaml
persistentvolume/gluster created

$ kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
gluster   50Gi       RWX            Retain           Available                                   10s
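
Note that the kubelet mounts a glusterfs PV with the mount.glusterfs helper, so the GlusterFS client must be installed on every node that may run Pods using this PV. A minimal sketch for CentOS/RHEL nodes; the package names are assumptions and differ per distribution (e.g. glusterfs-client on Debian/Ubuntu):

# Run on every Kubernetes worker node
$ yum install -y glusterfs glusterfs-fuse
# The mount helper used by the kubelet should now be present
$ which mount.glusterfs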

4. Create a PVC for the GlusterFS PV

$ vi glusterfs_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # name of the claim; it is referenced by the Pod's claimName below
  # (it does not have to match the PV name; binding is by access modes and capacity)
  name: gluster
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      # capacity requested from the PV
      storage: 20Gi

Apply the YAML:

$ kubectl  create -f glusterfs_pvc.yaml
persistentvolumeclaim/gluster created

$ kubectl get pvc
NAME      STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
gluster   Bound    gluster   50Gi       RWX                           83s
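
The claim shows the PV's full 50Gi even though only 20Gi was requested, because a PVC always binds to a whole PV. If several Available PVs could satisfy the request, the claim can be pinned to this particular PV through the label defined on it; a sketch only, the selector block is an addition and not part of the original manifest:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  # restrict binding to PVs labeled type=glusterfs
  selector:
    matchLabels:
      type: glusterfs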

5. Create an nginx Pod that mounts the gluster PVC, in a file named nginx-demo.yaml

$ vim nginx-demo.yaml
---
# Pod
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
    env: test
spec:
  containers:
    - name: nginx
      image: nginx:1.13
      ports:
        - containerPort: 80
      volumeMounts:
        - name: data-gv0
          mountPath: /usr/share/nginx/html
  volumes:
  - name: data-gv0
    persistentVolumeClaim:
      # reference the PVC created above
      claimName: gluster

Apply the YAML:

$ kubectl  create -f nginx-demo.yaml
pod/nginx created

[root@k8s-master01 ~]# kubectl get pods  | grep "nginx"
nginx  1/1     Running     0          2m     10.244.1.222   k8s-node01   
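
Once the Pod is Running, it is worth confirming that the mount inside the container really is the GlusterFS volume rather than the node's local disk (a quick check; the exact output depends on the cluster):

$ kubectl exec nginx -- df -h /usr/share/nginx/html
$ kubectl exec nginx -- mount | grep "nginx/html"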

Mount the gv0 volume at /mnt on any client with the GlusterFS client installed, then create an index.html file:

$ mount -t glusterfs k8s-store01:/gv0 /mnt/
$ cd /mnt && echo "this nginx store used gluterfs cluster" >index.html

Access the Pod from the master node with curl:

$ curl 10.244.1.222/index.html
this nginx store used gluterfs cluster
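
Writes go the other way as well: a file created from inside the Pod is immediately visible on the client that mounted gv0. An optional sanity check follows; the file name from-pod.html is just an example.

$ kubectl exec nginx -- sh -c 'echo "written from the pod" > /usr/share/nginx/html/from-pod.html'
# On the client that mounted gv0 at /mnt:
$ cat /mnt/from-pod.html
written from the pod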

Original article: https://blog.51cto.com/12643266/2457010

