• Shared control plane (multi-network)
    • Prerequisites
    • Setup the multicluster mesh
      • Setup cluster 1 (primary)
      • Setup cluster 2
      • Start watching cluster 2
    • Deploy example service
      • Deploy helloworld v2 in cluster 2
      • Deploy helloworld v1 in cluster 1
      • Cross-cluster routing in action
    • Cleanup
    • See also

    Shared control plane (multi-network)

    Follow this guide to configure a multicluster mesh using a shared control plane with gateways to connect network-isolated clusters. Istio’s location-aware service routing feature is used to route requests to different endpoints, depending on the location of the request source.

    By following the instructions in this guide, you will set up a two-cluster mesh as shown in the following diagram:

    Shared Istio control plane topology spanning multiple Kubernetes clusters using gateways

    The primary cluster, cluster1, runs the full set of Istio control plane components, while cluster2 only runs Istio Citadel, Sidecar Injector, and Ingress gateway. No VPN connectivity or direct network access between workloads in different clusters is required.

    Prerequisites

    • Two or more Kubernetes clusters running version 1.13, 1.14, or 1.15.

    • Authority to deploy the Istio control plane

    • Two Kubernetes clusters (referred to as cluster1 and cluster2).

    The Kubernetes API server of cluster2 MUST be accessible from cluster1 in order to run this configuration.

    • You can use the kubectl command with the --context flag to access both the cluster1 and cluster2 clusters, for example kubectl get pods --context cluster1. Use the following command to list your contexts:
    $ kubectl config get-contexts
    CURRENT   NAME       CLUSTER    AUTHINFO       NAMESPACE
    *         cluster1   cluster1   user@foo.com   default
              cluster2   cluster2   user@foo.com   default
    • Store the context names of the clusters in environment variables:
    $ export CTX_CLUSTER1=$(kubectl config view -o jsonpath='{.contexts[0].name}')
    $ export CTX_CLUSTER2=$(kubectl config view -o jsonpath='{.contexts[1].name}')
    $ echo CTX_CLUSTER1 = ${CTX_CLUSTER1}, CTX_CLUSTER2 = ${CTX_CLUSTER2}
    CTX_CLUSTER1 = cluster1, CTX_CLUSTER2 = cluster2

    If you have more than two clusters in your configuration and you want to use clusters other than the first two to configure your mesh, manually set the environment variables to the appropriate context names.
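For example, you could pin the variables explicitly (the context names below are placeholders for whichever contexts you actually want to use):

```shell
# Set the variables to explicit context names instead of relying on
# positional jsonpath indexes; "cluster1"/"cluster2" are placeholders.
export CTX_CLUSTER1=cluster1
export CTX_CLUSTER2=cluster2
echo "CTX_CLUSTER1 = ${CTX_CLUSTER1}, CTX_CLUSTER2 = ${CTX_CLUSTER2}"
```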

    Setup the multicluster mesh

    In this configuration you install Istio with mutual TLS enabled for both the control plane and application pods. For the shared root CA, you create a cacerts secret on both the cluster1 and cluster2 clusters using the same Istio certificate from the Istio samples directory.

    The instructions below also set up cluster2 with a selector-less service and an endpoint for istio-pilot.istio-system that has the address of the cluster1 Istio ingress gateway. This will be used to access pilot on cluster1 securely using the ingress gateway without mutual TLS termination.

    Setup cluster 1 (primary)

    • Deploy Istio to cluster1:

    When you enable the additional components necessary for multicluster operation, the resource footprint of the Istio control plane may increase beyond the capacity of the default Kubernetes cluster you created when completing the Platform setup steps. If the Istio services aren’t getting scheduled due to insufficient CPU or memory, consider adding more nodes to your cluster or upgrading to larger memory instances as necessary.

    $ kubectl create --context=$CTX_CLUSTER1 ns istio-system
    $ kubectl create --context=$CTX_CLUSTER1 secret generic cacerts -n istio-system --from-file=samples/certs/ca-cert.pem --from-file=samples/certs/ca-key.pem --from-file=samples/certs/root-cert.pem --from-file=samples/certs/cert-chain.pem
    $ istioctl manifest apply --context=$CTX_CLUSTER1 \
      -f install/kubernetes/operator/examples/multicluster/values-istio-multicluster-primary.yaml

    Note that the gateway addresses are set to 0.0.0.0. These are temporary placeholder values that will later be updated with the public IPs of the cluster1 and cluster2 gateways after they are deployed in the following section.

    Wait for the Istio pods on cluster1 to become ready:

    $ kubectl get pods --context=$CTX_CLUSTER1 -n istio-system
    NAME                                      READY   STATUS      RESTARTS   AGE
    istio-citadel-9bbf9b4c8-nnmbt             1/1     Running     0          2m8s
    istio-cleanup-secrets-1.1.0-x9crw         0/1     Completed   0          2m12s
    istio-galley-868c5fff5d-9ph6l             1/1     Running     0          2m9s
    istio-ingressgateway-6c756547b-dwc78      1/1     Running     0          2m8s
    istio-pilot-54fcf8db8-sn9cn               2/2     Running     0          2m8s
    istio-policy-5fcbd55d8b-xhbpz             2/2     Running     2          2m8s
    istio-security-post-install-1.1.0-ww5zz   0/1     Completed   0          2m12s
    istio-sidecar-injector-6dcc9d5c64-7hnnl   1/1     Running     0          2m8s
    istio-telemetry-57875ffb6d-n2vmf          2/2     Running     3          2m8s
    prometheus-66c9f5694-8pccr                1/1     Running     0          2m8s
    • Create an ingress gateway to access service(s) in cluster2:
    $ kubectl apply --context=$CTX_CLUSTER1 -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: cluster-aware-gateway
      namespace: istio-system
    spec:
      selector:
        istio: ingressgateway
      servers:
      - port:
          number: 443
          name: tls
          protocol: TLS
        tls:
          mode: AUTO_PASSTHROUGH
        hosts:
        - "*.local"
    EOF

    This Gateway configures port 443 to pass incoming traffic through to the target service specified in a request’s SNI header, for SNI values of the local top-level domain (i.e., the Kubernetes DNS domain). Mutual TLS connections will be used all the way from the source to the destination sidecar.

    Although applied to cluster1, this Gateway instance will also affect cluster2 because both clusters communicate with the same Pilot.

    • Determine the ingress IP and port for cluster1.

      • Set the current context of kubectl to CTX_CLUSTER1:
    $ export ORIGINAL_CONTEXT=$(kubectl config current-context)
    $ kubectl config use-context $CTX_CLUSTER1
    • Follow the instructions in Determining the ingress IP and ports to set the INGRESS_HOST and SECURE_INGRESS_PORT environment variables.

    • Restore the previous kubectl context:

    $ kubectl config use-context $ORIGINAL_CONTEXT
    $ unset ORIGINAL_CONTEXT
    • Print the values of INGRESS_HOST and SECURE_INGRESS_PORT:
    $ echo The ingress gateway of cluster1: address=$INGRESS_HOST, port=$SECURE_INGRESS_PORT
    • Update the gateway address in the mesh network configuration. Edit the istio ConfigMap:
    $ kubectl edit cm -n istio-system --context=$CTX_CLUSTER1 istio

    Update the gateway’s address and port of network1 to reflect the cluster1 ingress host and port, respectively, then save and quit.

    Once saved, Pilot will automatically read the updated network configuration.
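    For illustration only, the network1 entry inside the ConfigMap’s mesh networks section might look like this after editing. The address 203.0.113.10 is a placeholder standing in for your $INGRESS_HOST, and the exact surrounding layout can vary across Istio versions; network2 is updated the same way when you set up cluster2.

```yaml
# Hypothetical edited fragment of the "istio" ConfigMap; the placeholder
# address 203.0.113.10 represents the value of $INGRESS_HOST.
meshNetworks: |-
  networks:
    network1:
      endpoints:
      - fromRegistry: Kubernetes
      gateways:
      - address: 203.0.113.10   # cluster1 ingress gateway ($INGRESS_HOST)
        port: 443               # $SECURE_INGRESS_PORT
```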

    Setup cluster 2

    • Export the cluster1 gateway address:
    $ export LOCAL_GW_ADDR=$(kubectl get --context=$CTX_CLUSTER1 svc --selector=app=istio-ingressgateway \
        -n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}') && echo ${LOCAL_GW_ADDR}

    This command sets the value to the gateway’s public IP and displays it.

    The command fails if the load balancer configuration doesn’t include an IP address; support for load balancers that expose only a DNS name is still pending.

    • Deploy Istio to cluster2:
    $ kubectl create --context=$CTX_CLUSTER2 ns istio-system
    $ kubectl create --context=$CTX_CLUSTER2 secret generic cacerts -n istio-system --from-file=samples/certs/ca-cert.pem --from-file=samples/certs/ca-key.pem --from-file=samples/certs/root-cert.pem --from-file=samples/certs/cert-chain.pem
    $ istioctl manifest apply --context=$CTX_CLUSTER2 \
      --set profile=remote \
      --set values.global.mtls.enabled=true \
      --set values.gateways.enabled=true \
      --set values.security.selfSigned=false \
      --set values.global.controlPlaneSecurityEnabled=true \
      --set values.global.createRemoteSvcEndpoints=true \
      --set values.global.remotePilotCreateSvcEndpoint=true \
      --set values.global.remotePilotAddress=${LOCAL_GW_ADDR} \
      --set values.global.remotePolicyAddress=${LOCAL_GW_ADDR} \
      --set values.global.remoteTelemetryAddress=${LOCAL_GW_ADDR} \
      --set values.gateways.istio-ingressgateway.env.ISTIO_META_NETWORK="network2" \
      --set values.global.network="network2"

    Wait for the Istio pods on cluster2, except for istio-ingressgateway, to become ready:

    $ kubectl get pods --context=$CTX_CLUSTER2 -n istio-system -l istio!=ingressgateway
    NAME                                     READY   STATUS      RESTARTS   AGE
    istio-citadel-75c8fcbfcf-9njn6           1/1     Running     0          12s
    istio-cleanup-secrets-1.1.0-vtp62        0/1     Completed   0          14s
    istio-sidecar-injector-cdb5d4dd5-rhks9   1/1     Running     0          12s

    istio-ingressgateway will not be ready until you configure the Istio control plane in cluster1 to watch cluster2. You will do this in the next section.

    • Determine the ingress IP and port for cluster2.

      • Set the current context of kubectl to CTX_CLUSTER2:
    $ export ORIGINAL_CONTEXT=$(kubectl config current-context)
    $ kubectl config use-context $CTX_CLUSTER2
    • Follow the instructions in Determining the ingress IP and ports to set the INGRESS_HOST and SECURE_INGRESS_PORT environment variables.

    • Restore the previous kubectl context:

    $ kubectl config use-context $ORIGINAL_CONTEXT
    $ unset ORIGINAL_CONTEXT
    • Print the values of INGRESS_HOST and SECURE_INGRESS_PORT:
    $ echo The ingress gateway of cluster2: address=$INGRESS_HOST, port=$SECURE_INGRESS_PORT
    • Update the gateway address in the mesh network configuration. Edit the istio ConfigMap:
    $ kubectl edit cm -n istio-system --context=$CTX_CLUSTER1 istio

    Update the gateway’s address and port of network2 to reflect the cluster2 ingress host and port, respectively, then save and quit.

    Once saved, Pilot will automatically read the updated network configuration.

    • Prepare environment variables for building the n2-k8s-config file for the service account istio-multi:
    $ CLUSTER_NAME=$(kubectl --context=$CTX_CLUSTER2 config view --minify=true -o jsonpath='{.clusters[].name}')
    $ SERVER=$(kubectl --context=$CTX_CLUSTER2 config view --minify=true -o jsonpath='{.clusters[].cluster.server}')
    $ SECRET_NAME=$(kubectl --context=$CTX_CLUSTER2 get sa istio-multi -n istio-system -o jsonpath='{.secrets[].name}')
    $ CA_DATA=$(kubectl get --context=$CTX_CLUSTER2 secret ${SECRET_NAME} -n istio-system -o jsonpath="{.data['ca\.crt']}")
    $ TOKEN=$(kubectl get --context=$CTX_CLUSTER2 secret ${SECRET_NAME} -n istio-system -o jsonpath="{.data['token']}" | base64 --decode)

    An alternative to base64 --decode is openssl enc -d -base64 -A on many systems.
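To confirm the two decoders behave identically on your system, you can round-trip a sample string locally (a sanity check only, unrelated to the cluster setup):

```shell
# Encode a sample string, then decode it with both tools;
# the results should match byte for byte.
sample=$(printf 'istio-multi-token' | base64)
a=$(printf '%s' "$sample" | base64 --decode)
b=$(printf '%s' "$sample" | openssl enc -d -base64 -A)
[ "$a" = "$b" ] && echo "decoders agree: $a"
# → decoders agree: istio-multi-token
```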

    • Create the n2-k8s-config file in the working directory:
    $ cat <<EOF > n2-k8s-config
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority-data: ${CA_DATA}
        server: ${SERVER}
      name: ${CLUSTER_NAME}
    contexts:
    - context:
        cluster: ${CLUSTER_NAME}
        user: ${CLUSTER_NAME}
      name: ${CLUSTER_NAME}
    current-context: ${CLUSTER_NAME}
    users:
    - name: ${CLUSTER_NAME}
      user:
        token: ${TOKEN}
    EOF
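Before using such a file against a live cluster, you can sanity-check the variable substitution itself with dummy values (everything below is illustrative and requires no cluster):

```shell
# Local sanity check with dummy values: verify that the shell substitutes
# the variables into the generated file as expected.
CLUSTER_NAME=cluster2
SERVER=https://192.0.2.1:6443
TOKEN=dummy-token
cat <<EOF > n2-k8s-config-demo
apiVersion: v1
kind: Config
current-context: ${CLUSTER_NAME}
users:
- name: ${CLUSTER_NAME}
  user:
    token: ${TOKEN}
EOF
grep -q "current-context: ${CLUSTER_NAME}" n2-k8s-config-demo && echo "substitution ok"
# → substitution ok
```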

    Start watching cluster 2

    • Execute the following commands to add and label the secret containing the cluster2 kubeconfig. After executing these commands, Istio Pilot on cluster1 will begin watching cluster2 for services and instances, just as it does for cluster1.
    $ kubectl create --context=$CTX_CLUSTER1 secret generic n2-k8s-secret --from-file n2-k8s-config -n istio-system
    $ kubectl label --context=$CTX_CLUSTER1 secret n2-k8s-secret istio/multiCluster=true -n istio-system
    • Wait for istio-ingressgateway to become ready:
    $ kubectl get pods --context=$CTX_CLUSTER2 -n istio-system -l istio=ingressgateway
    NAME                                    READY   STATUS    RESTARTS   AGE
    istio-ingressgateway-5c667f4f84-bscff   1/1     Running   0          16m

    Now that you have your cluster1 and cluster2 clusters set up, you can deploy an example service.

    Deploy example service

    As shown in the diagram above, deploy two instances of the helloworld service, one on cluster1 and one on cluster2. The difference between the two instances is the version of their helloworld image.

    Deploy helloworld v2 in cluster 2

    • Create a sample namespace with a sidecar auto-injection label:
    $ kubectl create --context=$CTX_CLUSTER2 ns sample
    $ kubectl label --context=$CTX_CLUSTER2 namespace sample istio-injection=enabled
    • Deploy helloworld v2:


    $ kubectl create --context=$CTX_CLUSTER2 -f samples/helloworld/helloworld.yaml -l app=helloworld -n sample
    $ kubectl create --context=$CTX_CLUSTER2 -f samples/helloworld/helloworld.yaml -l version=v2 -n sample
    • Confirm helloworld v2 is running:
    $ kubectl get po --context=$CTX_CLUSTER2 -n sample
    NAME                             READY   STATUS    RESTARTS   AGE
    helloworld-v2-7dd57c44c4-f56gq   2/2     Running   0          35s

    Deploy helloworld v1 in cluster 1

    • Create a sample namespace with a sidecar auto-injection label:
    $ kubectl create --context=$CTX_CLUSTER1 ns sample
    $ kubectl label --context=$CTX_CLUSTER1 namespace sample istio-injection=enabled
    • Deploy helloworld v1:


    $ kubectl create --context=$CTX_CLUSTER1 -f samples/helloworld/helloworld.yaml -l app=helloworld -n sample
    $ kubectl create --context=$CTX_CLUSTER1 -f samples/helloworld/helloworld.yaml -l version=v1 -n sample
    • Confirm helloworld v1 is running:
    $ kubectl get po --context=$CTX_CLUSTER1 -n sample
    NAME                            READY   STATUS    RESTARTS   AGE
    helloworld-v1-d4557d97b-pv2hr   2/2     Running   0          40s

    Cross-cluster routing in action

    To demonstrate how traffic to the helloworld service is distributed across the two clusters, call the helloworld service from another in-mesh sleep service.

    • Deploy the sleep service in both clusters:


    $ kubectl apply --context=$CTX_CLUSTER1 -f samples/sleep/sleep.yaml -n sample
    $ kubectl apply --context=$CTX_CLUSTER2 -f samples/sleep/sleep.yaml -n sample
    • Wait for the sleep service to start in each cluster:
    $ kubectl get po --context=$CTX_CLUSTER1 -n sample -l app=sleep
    sleep-754684654f-n6bzf   2/2   Running   0   5s
    $ kubectl get po --context=$CTX_CLUSTER2 -n sample -l app=sleep
    sleep-754684654f-dzl9j   2/2   Running   0   5s
    • Call the helloworld.sample service several times from cluster1:
    $ kubectl exec --context=$CTX_CLUSTER1 -it -n sample -c sleep $(kubectl get pod --context=$CTX_CLUSTER1 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
    • Call the helloworld.sample service several times from cluster2:
    $ kubectl exec --context=$CTX_CLUSTER2 -it -n sample -c sleep $(kubectl get pod --context=$CTX_CLUSTER2 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello

    If set up correctly, the traffic to the helloworld.sample service will be distributed between instances on cluster1 and cluster2, resulting in responses with either v1 or v2 in the body:

    Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
    Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv

    You can also verify the IP addresses used to access the endpoints by printing the log of the sleep pod’s istio-proxy container.

    $ kubectl logs --context=$CTX_CLUSTER1 -n sample $(kubectl get pod --context=$CTX_CLUSTER1 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') istio-proxy
    [2018-11-25T12:37:52.077Z] "GET /hello HTTP/1.1" 200 - 0 60 190 189 "-" "curl/7.60.0" "6e096efe-f550-4dfa-8c8c-ba164baf4679" "helloworld.sample:5000" "192.23.120.32:15443" outbound|5000||helloworld.sample.svc.cluster.local - 10.20.194.146:5000 10.10.0.89:59496 -
    [2018-11-25T12:38:06.745Z] "GET /hello HTTP/1.1" 200 - 0 60 171 170 "-" "curl/7.60.0" "6f93c9cc-d32a-4878-b56a-086a740045d2" "helloworld.sample:5000" "10.10.0.90:5000" outbound|5000||helloworld.sample.svc.cluster.local - 10.20.194.146:5000 10.10.0.89:59646 -

    In cluster1, the gateway IP of cluster2 (192.23.120.32:15443) is logged when v2 was called and the instance IP in cluster1 (10.10.0.90:5000) is logged when v1 was called.
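As a small illustration of reading these logs, the endpoint Envoy actually dialed is the quoted IP:port that follows the authority field: a remote hit shows the peer gateway on port 15443, while a local hit shows a pod IP on the service port. The snippet below runs against abridged copies of the lines above (a sketch, not live cluster output):

```shell
# Write abridged copies of two access-log lines, then extract the quoted
# upstream IP:port from each; hostnames like helloworld.sample do not
# match the numeric pattern, so only the dialed endpoint is printed.
cat <<'EOF' > sleep-proxy.log
"GET /hello HTTP/1.1" 200 "helloworld.sample:5000" "192.23.120.32:15443" outbound|5000||helloworld.sample.svc.cluster.local
"GET /hello HTTP/1.1" 200 "helloworld.sample:5000" "10.10.0.90:5000" outbound|5000||helloworld.sample.svc.cluster.local
EOF
grep -oE '"[0-9]+(\.[0-9]+){3}:[0-9]+"' sleep-proxy.log | tr -d '"'
# → 192.23.120.32:15443
# → 10.10.0.90:5000
```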

    $ kubectl logs --context=$CTX_CLUSTER2 -n sample $(kubectl get pod --context=$CTX_CLUSTER2 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') istio-proxy
    [2019-05-25T08:06:11.468Z] "GET /hello HTTP/1.1" 200 - "-" 0 60 177 176 "-" "curl/7.60.0" "58cfb92b-b217-4602-af67-7de8f63543d8" "helloworld.sample:5000" "192.168.1.246:15443" outbound|5000||helloworld.sample.svc.cluster.local - 10.107.117.235:5000 10.32.0.10:36840 -
    [2019-05-25T08:06:12.834Z] "GET /hello HTTP/1.1" 200 - "-" 0 60 181 180 "-" "curl/7.60.0" "ce480b56-fafd-468b-9996-9fea5257cb1e" "helloworld.sample:5000" "10.32.0.9:5000" outbound|5000||helloworld.sample.svc.cluster.local - 10.107.117.235:5000 10.32.0.10:36886 -

    In cluster2, the gateway IP of cluster1 (192.168.1.246:15443) is logged when v1 was called and the instance IP in cluster2 (10.32.0.9:5000) is logged when v2 was called.

    Cleanup

    Execute the following commands to clean up the example services and the Istio components.

    Clean up the cluster2 cluster:

    $ kubectl delete --context=$CTX_CLUSTER2 -f istio-remote-auth.yaml
    $ kubectl delete --context=$CTX_CLUSTER2 ns istio-system
    $ kubectl delete --context=$CTX_CLUSTER2 ns sample
    $ unset CTX_CLUSTER2 CLUSTER_NAME SERVER SECRET_NAME CA_DATA TOKEN INGRESS_HOST SECURE_INGRESS_PORT INGRESS_PORT
    $ rm istio-remote-auth.yaml

    Clean up the cluster1 cluster:

    $ kubectl delete --context=$CTX_CLUSTER1 -f istio-auth.yaml
    $ kubectl delete --context=$CTX_CLUSTER1 ns istio-system
    $ for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl delete --context=$CTX_CLUSTER1 -f $i; done
    $ kubectl delete --context=$CTX_CLUSTER1 ns sample
    $ unset CTX_CLUSTER1
    $ rm istio-auth.yaml n2-k8s-config

    See also

    Google Kubernetes Engine

    Set up a multicluster mesh over two GKE clusters.

    IBM Cloud Private

    Example multicluster mesh over two IBM Cloud Private clusters.

    Shared control plane (single-network)

    Install an Istio mesh across multiple Kubernetes clusters with a shared control plane and VPN connectivity between clusters.

    Replicated control planes

    Install an Istio mesh across multiple Kubernetes clusters with replicated control plane instances.

    Multi-mesh deployments for isolation and boundary protection

    Deploy environments that require isolation into separate meshes and enable inter-mesh communication by mesh federation.

    Version Routing in a Multicluster Service Mesh

    Configuring Istio route rules in a multicluster service mesh.