
Kubernetes API/dashboard issue

  •  0
  • horcle_buzz  ·  6 years ago

    I also posted this on ServerFault, but am hoping to get more eyes/feedback here:

    I am trying to get the dashboard UI working in a kubeadm cluster, using kubectl proxy for remote access. I am getting

    Error: 'dial tcp 192.168.2.3:8443: connect: connection refused'
    Trying to reach: 'https://192.168.2.3:8443/'
    

    when accessing http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ via a remote browser.
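For context, the remote-access setup being described is roughly the following (a sketch; the service path assumes the standard kube-system dashboard deployment):

```shell
# On the master node: start the API server proxy
# (binds to 127.0.0.1:8001 by default)
kubectl proxy

# From the same host, the dashboard should then be reachable at:
curl http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```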

    Looking at the API logs, I see the following errors:

    I1215 20:18:46.601151       1 log.go:172] http: TLS handshake error from 10.21.72.28:50268: remote error: tls: unknown certificate authority
    I1215 20:19:15.444580       1 log.go:172] http: TLS handshake error from 10.21.72.28:50271: remote error: tls: unknown certificate authority
    I1215 20:19:31.850501       1 log.go:172] http: TLS handshake error from 10.21.72.28:50275: remote error: tls: unknown certificate authority
    I1215 20:55:55.574729       1 log.go:172] http: TLS handshake error from 10.21.72.28:50860: remote error: tls: unknown certificate authority
    E1215 21:19:47.246642       1 watch.go:233] unable to encode watch object *v1.WatchEvent: write tcp 134.84.53.162:6443->134.84.53.163:38894: write: connection timed out (&streaming.encoder{writer:(*metrics.fancyResponseWriterDelegator)(0xc42d6fecb0), encoder:(*versioning.codec)(0xc429276990), buf:(*bytes.Buffer)(0xc42cae68c0)})
    

    Also, it is worth noting that this was working when I first brought up the cluster. However, I ran into networking issues and had to apply the fix from Coredns service do not work,but endpoint is ok the other SVCs are normal only except dns to get CoreDNS working, so I am wondering if that broke the proxy service.
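One quick sanity check on whether the CoreDNS change left cluster DNS healthy (a sketch; the `k8s-app=kube-dns` label assumes a default kubeadm install):

```shell
# List the CoreDNS pods and confirm they are Running on reachable nodes
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide

# Confirm the kube-dns service actually has endpoints behind it
kubectl -n kube-system get endpoints kube-dns
```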

    *Edit*

    Here is the output for the dashboard pod:

    [gms@thalia0 ~]$ kubectl describe pod kubernetes-dashboard-77fd78f978-tjzxt --namespace=kube-system
    Name:               kubernetes-dashboard-77fd78f978-tjzxt
    Namespace:          kube-system
    Priority:           0
    PriorityClassName:  <none>
    Node:               thalia2.hostdoman/hostip<redacted>
    Start Time:         Sat, 15 Dec 2018 15:17:57 -0600
    Labels:             k8s-app=kubernetes-dashboard
                        pod-template-hash=77fd78f978
    Annotations:        cni.projectcalico.org/podIP: 192.168.2.3/32
    Status:             Running
    IP:                 192.168.2.3
    Controlled By:      ReplicaSet/kubernetes-dashboard-77fd78f978
    Containers:
      kubernetes-dashboard:
        Container ID:  docker://ed5ff580fb7d7b649d2bd1734e5fd80f97c80dec5c8e3b2808d33b8f92e7b472
        Image:         k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
        Image ID:      docker-pullable://k8s.gcr.io/kubernetes-dashboard-amd64@sha256:1d2e1229a918f4bc38b5a3f9f5f11302b3e71f8397b492afac7f273a0008776a
        Port:          8443/TCP
        Host Port:     0/TCP
        Args:
          --auto-generate-certificates
        State:          Running
          Started:      Sat, 15 Dec 2018 15:18:04 -0600
        Ready:          True
        Restart Count:  0
        Liveness:       http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
        Environment:    <none>
        Mounts:
          /certs from kubernetes-dashboard-certs (rw)
          /tmp from tmp-volume (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-mrd9k (ro)
    Conditions:
      Type              Status
      Initialized       True
      Ready             True
      ContainersReady   True
      PodScheduled      True
    Volumes:
      kubernetes-dashboard-certs:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  kubernetes-dashboard-certs
        Optional:    false
      tmp-volume:
        Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:
      kubernetes-dashboard-token-mrd9k:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  kubernetes-dashboard-token-mrd9k
        Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:  <none>
    Tolerations:     node-role.kubernetes.io/master:NoSchedule
                     node.kubernetes.io/not-ready:NoExecute for 300s
                     node.kubernetes.io/unreachable:NoExecute for 300s
    Events:          <none>
    

    I checked the service:

    [gms@thalia0 ~]$ kubectl -n kube-system get service kubernetes-dashboard
    NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
    kubernetes-dashboard   ClusterIP   10.103.93.93   <none>        443/TCP   4d23h
    

    Running curl http://localhost:8001/api from the master node does return a valid response.
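Given the `dial tcp 192.168.2.3:8443: connection refused` error, it may also be worth confirming that the service's endpoint still matches the pod IP, and that the pod's port answers at all (a sketch; the IP is the one from the pod description above):

```shell
# The endpoint listed here should match the pod IP and port (192.168.2.3:8443)
kubectl -n kube-system get endpoints kubernetes-dashboard

# From a cluster node, probe the pod's HTTPS port directly
# (-k: the dashboard auto-generates a self-signed certificate)
curl -k --max-time 5 https://192.168.2.3:8443/
```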

    I just upgraded the cluster to 1.13.1, hoping this issue would be resolved, but alas, it was not.

    2 replies  |  6 years ago
        1
  •  2
  •   Majid Rajabi  ·  6 years ago

    When you run kubectl proxy, the default port 8001 is only accessible from localhost. If you SSH into the machine where Kubernetes is installed, you have to forward this port to your laptop or whatever device you are using to SSH.

    You can SSH into the master node and forward port 8001 to your local box with:

    ssh -L 8001:localhost:8001 hostname@master_node_IP
    
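With the tunnel up, the proxy URL from the question should then work from the local machine, for example:

```shell
# Run on the local machine while the SSH tunnel is open
curl http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```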
        2
  •  2
  •   horcle_buzz  ·  6 years ago

    I upgraded all nodes in the cluster to version 1.13.1 and, voilà, the dashboard now works. So far I have not had to apply the CoreDNS fix mentioned above.