
Having covered the concept of shared storage in the previous article, let's now put it into practice. Here we'll use GlusterFS as the underlying storage service; let's see how it's done.


3. Preparing the GlusterFS environment

Requirements:

1. GlusterFS needs at least three nodes. (I only have two nodes here, so I'll configure just two. It will start up, but problems will appear later and this walkthrough will not run to completion; three or more nodes are required. I'm only doing a demonstration. If you have enough system resources but not enough nodes, you can add more worker nodes; an earlier article in this series covers how.)

2. Each node needs one bare (unpartitioned) disk.

3. All three nodes must be members of the Kubernetes cluster.

I have no spare servers, so I'll simply add a 1 GB bare disk to each of the two existing nodes. After adding them, the state looks like this:

[root@node2 ~]# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000d1bb2

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200    41943039    19921920   8e  Linux LVM

Disk /dev/sdc: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x8207355b

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    41943039    20970496   83  Linux

Disk /dev/mapper/centos-root: 34.0 GB, 33978056704 bytes, 66363392 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@node2 ~]#

We can see that /dev/sdc is an empty disk; that is the one we will work with.
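
As a quick sanity check (an extra step, not in the original walkthrough), you can confirm that /dev/sdc really has no partitions or filesystem signature before handing it over to GlusterFS:

[root@node2 ~]# lsblk /dev/sdc          # a bare disk shows no child partitions and no mountpoint
[root@node2 ~]# blkid /dev/sdc || echo "no filesystem signature found"   # blkid exits non-zero when the disk is blank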

Then let's check the Gluster official site to see what else is needed. It turns out we also need Heketi (the REST-based management service GlusterFS uses for volume provisioning). OK, let's get started.

4. Installing GlusterFS

First we need to install the GlusterFS client on the worker nodes, which in my case are node2 and node3. The following command needs to be run on both node2 and node3:

[root@node2 ~]# yum -y install glusterfs glusterfs-fuse
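
To make sure the client pieces landed correctly (a quick check of my own, not from the original steps), you can print the version and confirm the fuse kernel module loads, since the mount path relies on FUSE:

[root@node2 ~]# glusterfs --version
[root@node2 ~]# modprobe fuse && lsmod | grep fuse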

Then check whether the apiserver supports privileged containers; the key flag to look for is allow-privileged:

[root@node1 ~]# ps -ef | grep apiserver | grep allow-privileged
root        777      1  6 09:32 ?        00:04:06 /usr/local/bin/kube-apiserver --advertise-address=192.168.112.130 --allow-privileged=true --apiserver-count=2 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/log/audit.log --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --client-ca-file=/etc/kubernetes/ssl/ca.pem --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem --etcd-servers=https://192.168.112.130:2379,https://192.168.112.131:2379,https://192.168.112.132:2379 --event-ttl=1h --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kubernetes.pem --kubelet-client-key=/etc/kubernetes/ssl/kubernetes-key.pem --service-account-issuer=api --service-account-key-file=/etc/kubernetes/ssl/service-account.pem --service-account-signing-key-file=/etc/kubernetes/ssl/service-account-key.pem --api-audiences=api,vault,factors --service-cluster-ip-range=10.233.0.0/16 --service-node-port-range=30000-32767 --proxy-client-cert-file=/etc/kubernetes/ssl/proxy-client.pem --proxy-client-key-file=/etc/kubernetes/ssl/proxy-client-key.pem --runtime-config=api/all=true --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem --requestheader-allowed-names=aggregator --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --v=1 --feature-gates=RemoveSelfLink=false
[root@node1 ~]#

After the apiserver check, verify that the kubelet supports it as well; if it doesn't, add the flag and restart the kubelet service. (Note that recent Kubernetes releases have removed the kubelet's --allow-privileged flag entirely, so check whether your version still accepts it before adding it.)

[root@node2 ~]# cat /etc/systemd/system/kubelet.service | grep allow-privileged
  --allow-privileged=true \
[root@node2 ~]#
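
If the flag were missing and your kubelet version still accepted it, a typical sequence would be the following (a sketch; the unit file path is the one shown above):

[root@node2 ~]# vim /etc/systemd/system/kubelet.service    # add --allow-privileged=true to the kubelet arguments
[root@node2 ~]# systemctl daemon-reload
[root@node2 ~]# systemctl restart kubelet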

First we need to prepare a DaemonSet; this DaemonSet provides the GlusterFS server side for us:

[root@node1 ~]# cd namespace/
[root@node1 namespace]# mkdir glusterfs
[root@node1 namespace]# cd glusterfs/
[root@node1 glusterfs]# 
[root@node1 glusterfs]# vim glusterfs-daemonset.yaml 
---
kind: DaemonSet
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: glusterfs
  labels:
    glusterfs: daemonset
  annotations:
    description: GlusterFS DaemonSet
    tags: glusterfs
spec:
  selector:
    matchLabels:
      glusterfs: pod
      glusterfs-node: pod
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs: pod
        glusterfs-node: pod
    spec:
      nodeSelector:
        storagenode: glusterfs
      hostNetwork: true
      containers:
      - image: gluster/gluster-centos:latest
        imagePullPolicy: IfNotPresent
        name: glusterfs
        env:
        # alternative for /dev volumeMount to enable access to *all* devices
        - name: HOST_DEV_DIR
          value: "/mnt/host-dev"
        # set GLUSTER_BLOCKD_STATUS_PROBE_ENABLE to "1" so the
        # readiness/liveness probe validate gluster-blockd as well
        - name: GLUSTER_BLOCKD_STATUS_PROBE_ENABLE
          value: "1"
        - name: GB_GLFS_LRU_COUNT
          value: "15"
        - name: TCMU_LOGDIR
          value: "/var/log/glusterfs/gluster-block"
        resources:
          requests:
            memory: 100Mi
            cpu: 100m
        volumeMounts:
        - name: glusterfs-heketi
          mountPath: "/var/lib/heketi"
        - name: glusterfs-run
          mountPath: "/run"
        - name: glusterfs-lvm
          mountPath: "/run/lvm"
        - name: glusterfs-etc
          mountPath: "/etc/glusterfs"
        - name: glusterfs-logs
          mountPath: "/var/log/glusterfs"
        - name: glusterfs-config
          mountPath: "/var/lib/glusterd"
        - name: glusterfs-host-dev
          mountPath: "/mnt/host-dev"
        - name: glusterfs-misc
          mountPath: "/var/lib/misc/glusterfsd"
        - name: glusterfs-block-sys-class
          mountPath: "/sys/class"
        - name: glusterfs-block-sys-module
          mountPath: "/sys/module"
        - name: glusterfs-cgroup
          mountPath: "/sys/fs/cgroup"
          readOnly: true
        - name: glusterfs-ssl
          mountPath: "/etc/ssl"
          readOnly: true
        - name: kernel-modules
          mountPath: "/usr/lib/modules"
          readOnly: true
        securityContext:
          capabilities: {}
          privileged: true
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh readiness; else systemctl status glusterd.service; fi"
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh liveness; else systemctl status glusterd.service; fi"
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
      volumes:
      - name: glusterfs-heketi
        hostPath:
          path: "/var/lib/heketi"
      - name: glusterfs-run
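        # no volume source given here, so Kubernetes defaults this volume to an emptyDir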
      - name: glusterfs-lvm
        hostPath:
          path: "/run/lvm"
      - name: glusterfs-etc
        hostPath:
          path: "/etc/glusterfs"
      - name: glusterfs-logs
        hostPath:
          path: "/var/log/glusterfs"
      - name: glusterfs-config
        hostPath:
          path: "/var/lib/glusterd"
      - name: glusterfs-host-dev
        hostPath:
          path: "/dev"
      - name: glusterfs-misc
        hostPath:
          path: "/var/lib/misc/glusterfsd"
      - name: glusterfs-block-sys-class
        hostPath:
          path: "/sys/class"
      - name: glusterfs-block-sys-module
        hostPath:
          path: "/sys/module"
      - name: glusterfs-cgroup
        hostPath:
          path: "/sys/fs/cgroup"
      - name: glusterfs-ssl
        hostPath:
          path: "/etc/ssl"
      - name: kernel-modules
        hostPath:
          path: "/usr/lib/modules"
[root@node1 glusterfs]#
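
Before moving on, you can optionally validate the manifest without creating anything (an extra check, not in the original article):

[root@node1 glusterfs]# kubectl apply --dry-run=client -f glusterfs-daemonset.yaml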

Notice that the DaemonSet above contains a nodeSelector:

spec:
      nodeSelector:
        storagenode: glusterfs

So we need to put this label on the nodes that should run GlusterFS; here that means node2 and node3:

[root@node1 glusterfs]# kubectl get node
NAME    STATUS     ROLES    AGE   VERSION
node2   NotReady   <none>   36d   v1.20.2
node3   NotReady   <none>   36d   v1.20.2
[root@node1 glusterfs]# kubectl label node node2 storagenode=glusterfs
node/node2 labeled
[root@node1 glusterfs]# kubectl label node node3 storagenode=glusterfs
node/node3 labeled
[root@node1 glusterfs]#
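
You can verify that the labels took effect with a selector query (a quick check of my own):

[root@node1 glusterfs]# kubectl get node -l storagenode=glusterfs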

Then apply the DaemonSet:

[root@node1 glusterfs]# kubectl apply -f glusterfs-daemonset.yaml 
daemonset.apps/glusterfs created
[root@node1 glusterfs]# kubectl get pod -o wide
NAME              READY   STATUS    RESTARTS   AGE    IP                NODE    NOMINATED NODE   READINESS GATES
glusterfs-ld4vr   1/1     Running   0          155m   192.168.112.131   node2   <none>           <none>
glusterfs-mz8rt   1/1     Running   0          155m   192.168.112.132   node3   <none>           <none>
[root@node1 glusterfs]#
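
With both pods Running, you can optionally confirm that glusterd is alive inside them (the pod name glusterfs-ld4vr comes from the output above; yours will differ):

[root@node1 glusterfs]# kubectl exec glusterfs-ld4vr -- gluster --version
[root@node1 glusterfs]# kubectl exec glusterfs-ld4vr -- systemctl status glusterd.service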

Now that the server side is up, the disks still haven't been initialized. Disk initialization is handled by Heketi, which runs gluster commands by exec'ing into the GlusterFS pods (that is why the RBAC rules below grant it access to pods/exec). Let's see how to configure it:

[root@node1 glusterfs]# vim heketi-security.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: heketi-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: heketi-clusterrole
subjects:
- kind: ServiceAccount
  name: heketi-service-account
  namespace: default

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: heketi-service-account
  namespace: default

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: heketi-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - pods/status
  - pods/exec
  verbs:
  - get
  - list
  - watch
  - create
[root@node1 glusterfs]# 
[root@node1 glusterfs]# vim heketi-deployment.yaml 
kind: Service
apiVersion: v1
metadata:
  name: heketi
  labels: 
    glusterfs: heketi-service
    deploy-heketi: support
  annotations: 
    description: Exposes Heketi Service
spec:
  selector: 
    name: heketi
  ports:
  - name: heketi
    port: 80
    targetPort: 8080

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "30001": default/heketi:80

---

kind: Deployment
apiVersion: apps/v1
metadata:
  name: heketi
  labels: 
    glusterfs: heketi-deployment
  annotations: 
    description: Defines how to deploy Heketi
spec:
  selector:
    matchLabels:
      name: heketi
      glusterfs: heketi-pod
  replicas: 1
  template:
    metadata:
      name: heketi
      labels: 
        name: heketi
        glusterfs: heketi-pod
    spec:
      serviceAccountName: heketi-service-account
      containers:
      - image: heketi/heketi:dev
        imagePullPolicy: Always
        name: heketi
        env:
        - name: HEKETI_EXECUTOR
          value: "kubernetes"
        - name: HEKETI_DB_PATH
          value: "/var/lib/heketi/heketi.db"
        - name: HEKETI_FSTAB
          value: "/var/lib/heketi/fstab"
        - name: HEKETI_SNAPSHOT_LIMIT
          value: "14"
        - name: HEKETI_KUBE_GLUSTER_DAEMONSET
          value: "y"
        - name: HEKETI_ADMIN_KEY
          value: "yunweijia123"
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: db
          mountPath: /var/lib/heketi
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 3
          httpGet: 
            path: /hello
            port: 8080
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 30
          httpGet: 
            path: /hello
            port: 8080
      volumes:
      - name: db
        hostPath:
          path: "/heketi-data"
[root@node1 glusterfs]#

Note the HEKETI_ADMIN_KEY configured here; we will need it later during initialization. Also note the tcp-services ConfigMap above: it tells ingress-nginx to expose Heketi's service port 80 on TCP port 30001.
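
As a preview of how this key will be used once everything below is applied (a sketch; 192.168.112.131 is node2's IP, and 30001 is the port we mapped in the tcp-services ConfigMap), heketi-cli authenticates with the admin user and this secret:

[root@node1 glusterfs]# heketi-cli --server http://192.168.112.131:30001 --user admin --secret yunweijia123 cluster list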

Then apply both manifests to bring everything up:

[root@node1 glusterfs]# kubectl apply -f heketi-security.yaml 
clusterrolebinding.rbac.authorization.k8s.io/heketi-clusterrolebinding created
serviceaccount/heketi-service-account created
clusterrole.rbac.authorization.k8s.io/heketi-clusterrole created
[root@node1 glusterfs]# kubectl apply -f heketi-deployment.yaml 
service/heketi unchanged
configmap/tcp-services unchanged
deployment.apps/heketi created
[root@node1 glusterfs]# 
[root@node1 glusterfs]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP                NODE    NOMINATED NODE   READINESS GATES
glusterfs-ld4vr          1/1     Running   0          3h27m   192.168.112.131   node2   <none>           <none>
glusterfs-mz8rt          1/1     Running   0          3h27m   192.168.112.132   node3   <none>           <none>
heketi-7d7bc4758-7m6d6   1/1     Running   0          3m49s   10.200.104.47     node2   <none>           <none>
[root@node1 glusterfs]#
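
To confirm Heketi is answering, you can hit the /hello endpoint that its probes use (10.200.104.47 is the pod IP from the table above; you could equally curl the heketi service on port 80):

[root@node1 glusterfs]# curl http://10.200.104.47:8080/hello   # expect a short greeting like: Hello from Heketi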

 

