
Error when using local persistent volumes in a StatefulSet pod

  •  6
  • rishi007bansod  ·  Tech Community  ·  6 years ago

    I am trying to use local persistent volumes as described in https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/ , but my pod fails to schedule and reports the following events:

    Events:
      Type     Reason            Age                 From               Message
      ----     ------            ----                ----               -------
      Warning  FailedScheduling  4s (x243 over 20m)  default-scheduler  0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate.
    

    Below are the storage classes and persistent volumes I created. The kafka-broker StorageClass:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: kafka-broker
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer
    

    The kafka-zookeeper StorageClass:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: kafka-zookeeper
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer
    

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-local-pv-zookeeper
    spec:
      capacity:
        storage: 2Gi
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: kafka-zookeeper
      local:
        path: /D/kubernetes-mount-path
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - my-node
    

    The Kafka PersistentVolume (pv-kafka.yml):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-local-pv
    spec:
      capacity:
        storage: 200Gi
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: kafka-broker
      local:
        path: /D/kubernetes-mount-path
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - my-node
    

    Below is the pod definition, 50pzoo.yml, that uses this volume:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: pzoo
      namespace: kafka
    spec:
      selector:
        matchLabels:
          app: zookeeper
          storage: persistent
      serviceName: "pzoo"
      replicas: 1
      updateStrategy:
        type: OnDelete
      template:
        metadata:
          labels:
            app: zookeeper
            storage: persistent
          annotations:
        spec:
          terminationGracePeriodSeconds: 10
          initContainers:
          - name: init-config
            image: solsson/kafka-initutils@sha256:18bf01c2c756b550103a99b3c14f741acccea106072cd37155c6d24be4edd6e2
            command: ['/bin/bash', '/etc/kafka-configmap/init.sh']
            volumeMounts:
            - name: configmap
              mountPath: /etc/kafka-configmap
            - name: config
              mountPath: /etc/kafka
            - name: data
              mountPath: /var/lib/zookeeper/data
          containers:
          - name: zookeeper
            image: solsson/kafka:2.0.0@sha256:8bc5ccb5a63fdfb977c1e207292b72b34370d2c9fe023bdc0f8ce0d8e0da1670
            env:
            - name: KAFKA_LOG4J_OPTS
              value: -Dlog4j.configuration=file:/etc/kafka/log4j.properties
            command:
            - ./bin/zookeeper-server-start.sh
            - /etc/kafka/zookeeper.properties
            ports:
            - containerPort: 2181
              name: client
            - containerPort: 2888
              name: peer
            - containerPort: 3888
              name: leader-election
            resources:
              requests:
                cpu: 10m
                memory: 100Mi
            readinessProbe:
              exec:
                command:
                - /bin/sh
                - -c
                - '[ "imok" = "$(echo ruok | nc -w 1 -q 1 127.0.0.1 2181)" ]'
            volumeMounts:
            - name: config
              mountPath: /etc/kafka
            - name: data
              mountPath: /var/lib/zookeeper/data
          volumes:
          - name: configmap
            configMap:
              name: zookeeper-config
          - name: config
            emptyDir: {}
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: kafka-zookeeper
          resources:
            requests:
              storage: 1Gi
    

    Output of the kubectl get events command:

    [root@quagga kafka-kubernetes-testing-single-node]# kubectl get events --namespace kafka
    LAST SEEN   FIRST SEEN   COUNT     NAME                           KIND                    SUBOBJECT   TYPE      REASON                 SOURCE                        MESSAGE
    1m          1m           1         pzoo.15517ca82c7a4675          StatefulSet                         Normal    SuccessfulCreate       statefulset-controller        create Claim data-pzoo-0 Pod pzoo-0 in StatefulSet pzoo success
    1m          1m           1         pzoo.15517ca82caed9bc          StatefulSet                         Normal    SuccessfulCreate       statefulset-controller        create Pod pzoo-0 in StatefulSet pzoo successful
    13s         1m           9         data-pzoo-0.15517ca82c726833   PersistentVolumeClaim               Normal    WaitForFirstConsumer   persistentvolume-controller   waiting for first consumer to be created before binding
    9s          1m           22        pzoo-0.15517ca82cb90238        Pod                                 Warning   FailedScheduling       default-scheduler             0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate.
    

    The output of kubectl get pv is:

    [root@quagga kafka-kubernetes-testing-single-node]# kubectl get pv
    NAME                         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS      REASON    AGE
    example-local-pv             200Gi      RWO            Retain           Available             kafka-broker                4m
    example-local-pv-zookeeper   2Gi        RWO            Retain           Available             kafka-zookeeper             4m
    
    2 Answers  |  6 years ago
        1
  •  3
  •   rishi007bansod    6 years ago

    It was a silly mistake. I had simply left my-node in the PV files; changing it to the correct node name solved my problem.
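
    A minimal sketch of that fix, for anyone landing here (the hostname shown below is a placeholder, not the node name from this thread): look up the value of the kubernetes.io/hostname label and use it in each PV's nodeAffinity instead of my-node.

    # List the nodes together with their labels; the value of
    # kubernetes.io/hostname is what the PV's nodeAffinity must match.
    kubectl get nodes --show-labels

    # Then, in every PersistentVolume, replace "my-node":
    nodeAffinity:
      required:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - <actual-node-name>   # placeholder: the real hostname, not my-node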

        2
  •  2
  •   StefanSchubert    5 years ago

    Thanks for sharing! I made the same mistake. I think the k8s documentation could point this out a bit more clearly (even though it is fairly obvious), since it is an easy copy-paste trap.

    To make it clearer: if the cluster has 3 nodes, you need to create three differently named PVs, each with the correct node name in place of "my-node" (see kubectl get nodes). The only reference between the volumeClaimTemplate and the PV is the name of the storage class.

    I used "local-pv-node-X" as the PV names, so when I look at the PV section in the Kubernetes dashboard I can see directly on which node a volume resides.
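
    As an illustration, one of the per-node PVs under that naming scheme might look like the sketch below (node name and path are placeholders, not values from this thread); the storageClassName is the only thing the volumeClaimTemplate matches against:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: local-pv-node-1                # one PV per node, named after the node
    spec:
      capacity:
        storage: 2Gi
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: kafka-zookeeper    # must match the volumeClaimTemplate's storageClassName
      local:
        path: /mnt/disks/zookeeper         # placeholder path on that node
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - node-1                     # the real node name from kubectl get nodes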

    You might want to update your listing with a hint about "my-node".