
Why does a contour setup on kubernetes (GKE) result in 2 functioning external IPs?

  • James Healy · 6 years ago

    I've been experimenting with contour as an alternative ingress controller on a test GKE kubernetes cluster.

    Following the contour deployment docs, with some modifications, I now have a working setup that serves a test HTTP response.

    First, I created a "helloworld" pod that serves an HTTP response, exposed via a NodePort service and an ingress:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: helloworld
    spec:
      replicas: 4
      template:
        metadata:
          labels:
            app: helloworld
        spec:
          containers:
            - name: "helloworld-http"
              image: "nginxdemos/hello:plain-text"
              imagePullPolicy: Always
              resources:
                requests:
                  cpu: 250m
                  memory: 256Mi
          affinity:
            podAntiAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 100
                podAffinityTerm:
                  labelSelector:
                    matchExpressions:
                    - key: app
                      operator: In
                      values:
                      - helloworld
                  topologyKey: "kubernetes.io/hostname"
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: helloworld-svc
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: helloworld
      sessionAffinity: None
      type: NodePort
    ---
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: helloworld-ingress
    spec:
      backend:
        serviceName: helloworld-svc
        servicePort: 80
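
    For completeness, this is how I apply it; a minimal sketch, assuming the three resources above are saved together in a file named helloworld.yaml (the filename is my own choice):

    # apply the Deployment, Service and Ingress, then wait for the pods
    $ kubectl apply -f helloworld.yaml
    $ kubectl rollout status deployment/helloworld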
    

    Then I created a deployment for contour, copied directly from their docs:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: heptio-contour
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: contour
      namespace: heptio-contour
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      labels:
        app: contour
      name: contour
      namespace: heptio-contour
    spec:
      selector:
        matchLabels:
          app: contour
      replicas: 2
      template:
        metadata:
          labels:
            app: contour
          annotations:
            prometheus.io/scrape: "true"
            prometheus.io/port: "9001"
            prometheus.io/path: "/stats"
            prometheus.io/format: "prometheus"
        spec:
          containers:
          - image: docker.io/envoyproxy/envoy-alpine:v1.6.0
            name: envoy
            ports:
            - containerPort: 8080
              name: http
            - containerPort: 8443
              name: https
            command: ["envoy"]
            args: ["-c", "/config/contour.yaml", "--service-cluster", "cluster0", "--service-node", "node0", "-l", "info", "--v2-config-only"]
            volumeMounts:
            - name: contour-config
              mountPath: /config
          - image: gcr.io/heptio-images/contour:master
            imagePullPolicy: Always
            name: contour
            command: ["contour"]
            args: ["serve", "--incluster"]
          initContainers:
          - image: gcr.io/heptio-images/contour:master
            imagePullPolicy: Always
            name: envoy-initconfig
            command: ["contour"]
            args: ["bootstrap", "/config/contour.yaml"]
            volumeMounts:
            - name: contour-config
              mountPath: /config
          volumes:
          - name: contour-config
            emptyDir: {}
          dnsPolicy: ClusterFirst
          serviceAccountName: contour
          terminationGracePeriodSeconds: 30
          affinity:
            podAntiAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 100
                podAffinityTerm:
                  labelSelector:
                    matchLabels:
                      app: contour
                  topologyKey: kubernetes.io/hostname
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: contour
      namespace: heptio-contour
    spec:
      ports:
      - port: 80
        name: http
        protocol: TCP
        targetPort: 8080
      - port: 443
        name: https
        protocol: TCP
        targetPort: 8443
      selector:
        app: contour
      type: LoadBalancer
    ---
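
    I applied this the same way and waited for GCE to provision the load balancer; a sketch, assuming the manifest above is saved as contour.yaml (again my own filename):

    $ kubectl apply -f contour.yaml

    # EXTERNAL-IP shows <pending> until GCE finishes provisioning the
    # TCP load balancer; -w watches the service until the IP appears
    $ kubectl get svc contour -n heptio-contour -w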
    

    The default and heptio-contour namespaces now look like this:

    $ kubectl get pods,svc,ingress -n default
    NAME                              READY     STATUS    RESTARTS   AGE
    pod/helloworld-7ddc8c6655-6vgdw   1/1       Running   0          6h
    pod/helloworld-7ddc8c6655-92j7x   1/1       Running   0          6h
    pod/helloworld-7ddc8c6655-mlvmc   1/1       Running   0          6h
    pod/helloworld-7ddc8c6655-w5g7f   1/1       Running   0          6h
    
    NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
    service/helloworld-svc   NodePort    10.59.240.105   <none>        80:31481/TCP   34m
    service/kubernetes       ClusterIP   10.59.240.1     <none>        443/TCP        7h
    
    NAME                                    HOSTS     ADDRESS   PORTS     AGE
    ingress.extensions/helloworld-ingress   *         y.y.y.y   80        34m
    
    $ kubectl get pods,svc,ingress -n heptio-contour
    NAME                          READY     STATUS    RESTARTS   AGE
    pod/contour-9d758b697-kwk85   2/2       Running   0          34m
    pod/contour-9d758b697-mbh47   2/2       Running   0          34m
    
    NAME              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
    service/contour   LoadBalancer   10.59.250.54   x.x.x.x       80:30882/TCP,443:32746/TCP   34m
    

    There are 2 publicly routable IP addresses:

    • x.x.x.x - a GCE TCP load balancer, forwarding to the contour pods
    • y.y.y.y - a GCE HTTP load balancer, forwarding to the helloworld pods via the helloworld ingress (a quick check of which controller owns which is sketched below)
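
    A quick way to confirm which controller provisioned which load balancer (a sketch; it assumes the gcloud CLI is configured for the same project): GLBC records the GCE resources it creates as annotations on the ingress, while contour's IP belongs to its LoadBalancer service.

    # GLBC writes its GCE resource names back onto the ingress object
    $ kubectl describe ingress helloworld-ingress

    # list the project's forwarding rules; one should hold x.x.x.x
    # (contour's TCP LB) and another y.y.y.y (GLBC's HTTP LB)
    $ gcloud compute forwarding-rules list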

    A curl against either public IP returns a valid HTTP response from the helloworld pods.

    # the TCP load balancer
    $ curl -v x.x.x.x
    * Rebuilt URL to: x.x.x.x/  
    *   Trying x.x.x.x...
    * TCP_NODELAY set
    * Connected to x.x.x.x (x.x.x.x) port 80 (#0)
    > GET / HTTP/1.1
    > Host: x.x.x.x
    > User-Agent: curl/7.58.0
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < server: envoy
    < date: Mon, 07 May 2018 14:14:39 GMT
    < content-type: text/plain
    < content-length: 155
    < expires: Mon, 07 May 2018 14:14:38 GMT
    < cache-control: no-cache
    < x-envoy-upstream-service-time: 1
    <
    Server address: 10.56.4.6:80
    Server name: helloworld-7ddc8c6655-w5g7f
    Date: 07/May/2018:14:14:39 +0000
    URI: /
    Request ID: ec3aa70e4155c396e7051dc972081c6a
    
    # the HTTP load balancer
    $ curl http://y.y.y.y 
    * Rebuilt URL to: y.y.y.y/
    *   Trying y.y.y.y...
    * TCP_NODELAY set
    * Connected to y.y.y.y (y.y.y.y) port 80 (#0)
    > GET / HTTP/1.1
    > Host: y.y.y.y
    > User-Agent: curl/7.58.0
    > Accept: */*
    > 
    < HTTP/1.1 200 OK
    < Server: nginx/1.13.8
    < Date: Mon, 07 May 2018 14:14:24 GMT
    < Content-Type: text/plain
    < Content-Length: 155
    < Expires: Mon, 07 May 2018 14:14:23 GMT
    < Cache-Control: no-cache
    < Via: 1.1 google
    < 
    Server address: 10.56.2.8:80
    Server name: helloworld-7ddc8c6655-mlvmc
    Date: 07/May/2018:14:14:24 +0000
    URI: /
    Request ID: 41b1151f083eaf30368cf340cfbb92fc
    

    Is it intentional that I have two public IPs? Which one should I use for clients? Can I pick between the TCP and HTTP load balancers based on my own preference?

    1 Answer
  • Maciek Sawicki · 6 years ago

    You probably have the GLBC ingress controller configured as well ( https://github.com/kubernetes/ingress-gce/blob/master/docs/faq/gce.md#how-do-i-disable-the-gce-ingress-controller ).
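
    If you don't want the GCE controller at all, the FAQ linked above covers disabling the HTTP load balancing addon; roughly (a sketch; "mycluster" is a placeholder for your cluster name, and you may also need --zone or --region):

    # disable GKE's built-in GLBC ingress controller cluster-wide
    $ gcloud container clusters update mycluster \
        --update-addons=HttpLoadBalancing=DISABLED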

    Could you try the following ingress definition instead?

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      annotations:
        kubernetes.io/ingress.class: "contour"
      name: helloworld-ingress
    spec:
      backend:
        serviceName: helloworld-svc
        servicePort: 80
    

    If you want to make sure your traffic goes through contour, you should use the x.x.x.x IP.
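
    To verify the annotation takes effect, you can recreate the ingress and confirm that only contour serves it; a rough sketch, assuming the definition above is saved as helloworld-ingress.yaml:

    # recreate the ingress with the contour class so GLBC releases its HTTP LB
    $ kubectl delete ingress helloworld-ingress
    $ kubectl apply -f helloworld-ingress.yaml

    # traffic should now be served only via contour's TCP load balancer
    $ curl -s http://x.x.x.x | head -n 3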