
Error during Kubernetes kubeadm init

  •  0
  •  rishi007bansod  ·  6 years ago

    I am trying to set up Kubernetes on my bare-metal cluster using kubeadm, but during initialization with kubeadm init I get the following error:

    [root@server docker]# kubeadm init
    [init] using Kubernetes version: v1.11.2
    [preflight] running pre-flight checks
            [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
            [WARNING HTTPProxy]: Connection to "https://192.111.141.4" uses proxy "http://lab:on@192.111.141.15:3122". If that is not intended, adjust your proxy settings
            [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://lab:on@192.111.141.15:3122". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
    I0827 16:33:00.426176   34482 kernel_validator.go:81] Validating kernel version
    I0827 16:33:00.426374   34482 kernel_validator.go:96] Validating kernel config
            [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.0-ce. Max validated version: 17.03
    [preflight/images] Pulling images required for setting up a Kubernetes cluster
    [preflight/images] This might take a minute or two, depending on the speed of your internet connection
    [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [preflight] Activating the kubelet service
    [certificates] Generated ca certificate and key.
    [certificates] Generated apiserver certificate and key.
    [certificates] apiserver serving cert is signed for DNS names [server.test.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.111.141.4]
    [certificates] Generated apiserver-kubelet-client certificate and key.
    [certificates] Generated sa key and public key.
    [certificates] Generated front-proxy-ca certificate and key.
    [certificates] Generated front-proxy-client certificate and key.
    [certificates] Generated etcd/ca certificate and key.
    [certificates] Generated etcd/server certificate and key.
    [certificates] etcd/server serving cert is signed for DNS names [server.test.com localhost] and IPs [127.0.0.1 ::1]
    [certificates] Generated etcd/peer certificate and key.
    [certificates] etcd/peer serving cert is signed for DNS names [server.test.com localhost] and IPs [192.111.141.4 127.0.0.1 ::1]
    [certificates] Generated etcd/healthcheck-client certificate and key.
    [certificates] Generated apiserver-etcd-client certificate and key.
    [certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
    [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
    [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
    [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
    [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
    [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
    [init] this might take a minute or longer if the control plane images have to be pulled
    
                    Unfortunately, an error has occurred:
                            timed out waiting for the condition
    
                    This error is likely caused by:
                            - The kubelet is not running
                            - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
                            - No internet connection is available so the kubelet cannot pull or find the following control plane images:
                                    - k8s.gcr.io/kube-apiserver-amd64:v1.11.2
                                    - k8s.gcr.io/kube-controller-manager-amd64:v1.11.2
                                    - k8s.gcr.io/kube-scheduler-amd64:v1.11.2
                                    - k8s.gcr.io/etcd-amd64:3.2.18
                                    - You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
                                      are downloaded locally and cached.
    
                    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                            - 'systemctl status kubelet'
                            - 'journalctl -xeu kubelet'
    
                    Additionally, a control plane component may have crashed or exited when started by the container runtime.
                    To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
                    Here is one example how you may list all Kubernetes containers running in docker:
                            - 'docker ps -a | grep kube | grep -v pause'
                            Once you have found the failing container, you can inspect its logs with:
                            - 'docker logs CONTAINERID'
    couldn't initialize a Kubernetes cluster
    

    The preflight images are already present on my system, but I still get this error. It appears right after [init] this might take a minute or longer if the control plane images have to be pulled
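    As a side note, the HTTPProxy and HTTPProxyCIDR warnings in the log indicate that the proxy would also intercept cluster-internal traffic. A minimal sketch (using the API server address and service CIDR taken from the preflight warnings above; adjust to your setup) of exempting them before running kubeadm init:

    ```shell
    # Exempt the API server address and the service CIDR (both taken from
    # the preflight warnings) from the HTTP proxy for this shell session.
    export NO_PROXY="192.111.141.4,10.96.0.0/12,127.0.0.1,localhost"
    export no_proxy="$NO_PROXY"
    ```

    Whether this is needed depends on how your proxy is configured; the warnings are non-fatal, but a proxy that swallows traffic to the service CIDR can produce exactly the kind of control-plane timeout shown above.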

    3 replies  |  6 years ago
        1
  •  1
  •   rishi007bansod    6 years ago

    The error was caused by the firewall being enabled on my system. I had to disable the firewall with the command:

    systemctl stop firewalld
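
    Disabling the firewall entirely works but is heavy-handed. An alternative sketch (assuming firewalld, and using the two ports named in the preflight warning above) is to open just the ports kubeadm complained about:

    ```shell
    # Open the API server (6443) and kubelet (10250) ports flagged by
    # the preflight check, then reload so the permanent rules take effect.
    firewall-cmd --permanent --add-port=6443/tcp
    firewall-cmd --permanent --add-port=10250/tcp
    firewall-cmd --reload
    ```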
    
        2
  •  1
  •   Rotem jackoby    4 years ago

    I would suggest:

    1) If you have already run kubeadm init on this node, try running kubeadm reset first.

    2) Run kubeadm init with the latest version, or pin a specific one by adding --kubernetes-version=X.Y.Z (changing from v1.19.2 to v1.19.3 solved the problem for me).

    3) Try all the kubelet-related actions, like @cgrim suggested.

    4) Check the firewall settings with sudo firewall-cmd --zone public --list-all and open the relevant ports.
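
    The first two steps can be sketched as follows (the version here is only an example; pick one appropriate for your cluster):

    ```shell
    # Wipe the state left behind by the failed init, then retry
    # with an explicitly pinned Kubernetes version.
    kubeadm reset -f
    kubeadm init --kubernetes-version=v1.19.3
    ```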

        3
  •  0
  •   cgrim    6 years ago

    Try restarting the kubelet:

    systemctl restart kubelet
    

    and check its status:

    systemctl status kubelet
    

    Check the kubelet logs:

    journalctl -xeu kubelet
    

    If restarting the kubelet does not help, you can try reinstalling it, since the kubelet is a separate package:

    • dnf reinstall kubelet on Fedora
    • yum reinstall kubelet on CentOS/RHEL
    • apt-get purge kubelet && apt-get install kubelet on Debian/Ubuntu

    You can also verify that the control plane images can be pulled manually, e.g.:

    docker pull k8s.gcr.io/kube-apiserver-amd64:v1.11.2
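
    If pulling one image works, the remaining images from the error message can be fetched the same way (a sketch, with the image names and tags taken from the kubeadm output above), or all at once with kubeadm config images pull, which the preflight output itself recommends:

    ```shell
    # Pre-pull every control-plane image listed in the kubeadm error output,
    # so the init step does not have to wait on downloads.
    for img in kube-apiserver-amd64:v1.11.2 \
               kube-controller-manager-amd64:v1.11.2 \
               kube-scheduler-amd64:v1.11.2 \
               etcd-amd64:3.2.18; do
      docker pull "k8s.gcr.io/$img"
    done
    ```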