• Replicated control planes
    • Prerequisites
    • Deploy the Istio control plane in each cluster
    • Setup DNS
    • Configure application services
      • Configure the example services
      • Send remote traffic via an egress gateway
      • Cleanup the example
    • Version-aware routing to remote services
    • Uninstalling
    • Summary
    • See also

    Replicated control planes

Follow this guide to install an Istio multicluster deployment with replicated control plane instances in every cluster, using gateways to connect services across clusters.

Instead of using a shared Istio control plane to manage the mesh, in this configuration each cluster has its own Istio control plane installation, each managing its own endpoints. All of the clusters are under a shared administrative control for the purposes of policy enforcement and security.

A single Istio service mesh across the clusters is achieved by replicating shared services and namespaces and using a common root CA in all of the clusters. Cross-cluster communication occurs over Istio gateways of the respective clusters.

Istio mesh spanning multiple Kubernetes clusters using Istio Gateway to reach remote pods

    Prerequisites

• Two or more Kubernetes clusters running a supported version (1.13, 1.14, or 1.15).

• Authority to deploy the Istio control plane on each Kubernetes cluster.

• The IP address of the istio-ingressgateway service in each cluster must be accessible from every other cluster, ideally using L4 network load balancers (NLB). Not all cloud providers support NLBs and some require special annotations to use them, so please consult your cloud provider’s documentation for enabling NLBs for service object type load balancers. When deploying on platforms without NLB support, it may be necessary to modify the health checks for the load balancer to register the ingress gateway.

• A root CA. Cross-cluster communication requires a mutual TLS connection between services. To enable mutual TLS communication across clusters, each cluster’s Citadel will be configured with intermediate CA credentials generated by a shared root CA. For illustration purposes, you use a sample root CA certificate available in the Istio installation under the samples/certs directory.
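Before configuring the clusters, it can help to sanity-check the CA material you plan to share. The following optional sketch assumes you run it from the root of an Istio release so the samples/certs paths resolve; substitute your own files in a real deployment. It inspects the sample root certificate and confirms that the sample intermediate CA certificate was issued by it:

# Show the subject and expiry of the sample root CA certificate.
$ openssl x509 -in samples/certs/root-cert.pem -noout -subject -enddate
# Verify that the sample intermediate CA certificate chains back to that root.
$ openssl verify -CAfile samples/certs/root-cert.pem samples/certs/ca-cert.pem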

    Deploy the Istio control plane in each cluster

• Generate intermediate CA certificates for each cluster’s Citadel from your organization’s root CA. The shared root CA enables mutual TLS communication across different clusters.

For illustration purposes, the following instructions use the certificates from the Istio samples directory for both clusters. In real world deployments, you would likely use a different CA certificate for each cluster, all signed by a common root CA.

• Run the following commands in every cluster to deploy an identical Istio control plane configuration in all of them.

Make sure that the current user has cluster administrator (cluster-admin) permissions and grant them if not. On the GKE platform, for example, the following command can be used:

$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user="$(gcloud config get-value core/account)"
    • Create a Kubernetes secret for your generated CA certificates using a command similar to the following. See Certificate Authority (CA) certificates for more details.

The root and intermediate certificate from the samples directory are widely distributed and known. Do not use these certificates in production as your clusters would then be open to security vulnerabilities and compromise.


$ kubectl create namespace istio-system
$ kubectl create secret generic cacerts -n istio-system \
    --from-file=samples/certs/ca-cert.pem \
    --from-file=samples/certs/ca-key.pem \
    --from-file=samples/certs/root-cert.pem \
    --from-file=samples/certs/cert-chain.pem
    • Install Istio:
$ istioctl manifest apply \
    -f install/kubernetes/operator/examples/multicluster/values-istio-multicluster-gateways.yaml

For further details and customization options, refer to the installation instructions.
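As an optional check (not part of the official procedure, and the exact pod list varies with your configuration), you can confirm in each cluster that the control plane components are running and that the istiocoredns service used in the next section exists:

$ kubectl get pods -n istio-system
$ kubectl get svc -n istio-system istiocoredns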

    Setup DNS

Providing DNS resolution for services in remote clusters will allow existing applications to function unmodified, as applications typically expect to resolve services by their DNS names and access the resulting IP. Istio itself does not use the DNS for routing requests between services. Services local to a cluster share a common DNS suffix (e.g., svc.cluster.local). Kubernetes DNS provides DNS resolution for these services.

To provide a similar setup for services from remote clusters, you name services from remote clusters in the format <name>.<namespace>.global. Istio also ships with a CoreDNS server that will provide DNS resolution for these services. In order to utilize this DNS, Kubernetes’ DNS must be configured to stub a domain for .global.

Some cloud providers have different specific DNS domain stub capabilities and procedures for their Kubernetes services. Reference the cloud provider’s documentation to determine how to stub DNS domains for each unique environment. The objective is to stub a domain for .global on port 53 to reference or proxy the istiocoredns service in Istio’s service namespace.

Create one of the following ConfigMaps, or update an existing one, in each cluster that will be calling services in remote clusters (every cluster in the general case). Use the first ConfigMap if the cluster runs kube-dns, the second if it runs a CoreDNS version earlier than 1.4.0 (which still uses the proxy plugin), and the third for CoreDNS 1.4.0 or later (which uses the forward plugin):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"global": ["$(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})"]}
EOF
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    global:53 {
        errors
        cache 30
        proxy . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})
    }
EOF
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    global:53 {
        errors
        cache 30
        forward . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})
    }
EOF
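To confirm the stub domain points at the right place, you can compare the istiocoredns cluster IP with what landed in the DNS configuration. A rough sketch for the CoreDNS variants (for kube-dns, inspect the stubDomains key of the kube-dns ConfigMap instead):

# Cluster IP of the Istio CoreDNS service.
$ kubectl get svc -n istio-system istiocoredns -o jsonpath='{.spec.clusterIP}'
# The global:53 block in the Corefile should reference that IP.
$ kubectl get configmap coredns -n kube-system -o jsonpath='{.data.Corefile}' | grep -A 3 'global:53'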

    Configure application services

Every service in a given cluster that needs to be accessed from a different remote cluster requires a ServiceEntry configuration in the remote cluster. The host used in the service entry should be of the form <name>.<namespace>.global where name and namespace correspond to the service’s name and namespace respectively.

To demonstrate cross cluster access, configure the sleep service running in one cluster to call the httpbin service running in a second cluster. Before you begin:

    • Choose two of your Istio clusters, to be referred to as cluster1 and cluster2.
• You can use the kubectl command to access both the cluster1 and cluster2 clusters with the --context flag, for example kubectl get pods --context cluster1. Use the following command to list your contexts:
$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO       NAMESPACE
*         cluster1   cluster1   user@foo.com   default
          cluster2   cluster2   user@foo.com   default
    • Store the context names of your clusters in environment variables:
$ export CTX_CLUSTER1=$(kubectl config view -o jsonpath='{.contexts[0].name}')
$ export CTX_CLUSTER2=$(kubectl config view -o jsonpath='{.contexts[1].name}')
$ echo CTX_CLUSTER1 = ${CTX_CLUSTER1}, CTX_CLUSTER2 = ${CTX_CLUSTER2}
CTX_CLUSTER1 = cluster1, CTX_CLUSTER2 = cluster2

If you have more than two clusters in the context list and you want to configure your mesh using clusters other than the first two, you will need to manually set the environment variables to the appropriate context names.
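For example, if the contexts you want to use are named cluster1 and cluster2 (hypothetical names; substitute the names shown by kubectl config get-contexts), you can set the variables directly:

$ export CTX_CLUSTER1=cluster1
$ export CTX_CLUSTER2=cluster2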

    Configure the example services

    • Deploy the sleep service in cluster1.


$ kubectl create --context=$CTX_CLUSTER1 namespace foo
$ kubectl label --context=$CTX_CLUSTER1 namespace foo istio-injection=enabled
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f samples/sleep/sleep.yaml
$ export SLEEP_POD=$(kubectl get --context=$CTX_CLUSTER1 -n foo pod -l app=sleep -o jsonpath={.items..metadata.name})
    • Deploy the httpbin service in cluster2.


$ kubectl create --context=$CTX_CLUSTER2 namespace bar
$ kubectl label --context=$CTX_CLUSTER2 namespace bar istio-injection=enabled
$ kubectl apply --context=$CTX_CLUSTER2 -n bar -f samples/httpbin/httpbin.yaml
    • Export the cluster2 gateway address:
$ export CLUSTER2_GW_ADDR=$(kubectl get --context=$CTX_CLUSTER2 svc --selector=app=istio-ingressgateway \
    -n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')

This command sets the value to the gateway’s public IP, but note that you can set it to a DNS name instead, if you have one.

If cluster2 is running in an environment that does not support external load balancers, you will need to use a nodePort to access the gateway. Instructions for obtaining the IP to use can be found in the Control Ingress Traffic guide. You will also need to change the service entry endpoint port in the following step from 15443 to its corresponding nodePort (i.e., kubectl --context=$CTX_CLUSTER2 get svc -n istio-system istio-ingressgateway -o=jsonpath='{.spec.ports[?(@.port==15443)].nodePort}').
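As a rough sketch of that nodePort alternative (the node address JSONPath is an assumption; pick an address type that is actually reachable from cluster1, for example ExternalIP instead of InternalIP), you could export the gateway address and port like this and then use ${CLUSTER2_GW_PORT} in place of 15443 in the service entry endpoint:

# Address of one cluster2 node; adjust the JSONPath for your environment.
$ export CLUSTER2_GW_ADDR=$(kubectl get --context=$CTX_CLUSTER2 nodes \
    -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
# nodePort that cluster2 exposes for the gateway's 15443 port.
$ export CLUSTER2_GW_PORT=$(kubectl get --context=$CTX_CLUSTER2 svc -n istio-system istio-ingressgateway \
    -o=jsonpath='{.spec.ports[?(@.port==15443)].nodePort}')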

    • Create a service entry for the httpbin service in cluster1.

To allow sleep in cluster1 to access httpbin in cluster2, we need to create a service entry for it. The host name of the service entry should be of the form <name>.<namespace>.global where name and namespace correspond to the remote service’s name and namespace respectively.

For DNS resolution for services under the *.global domain, you need to assign these services an IP address.

    Each service (in the .global DNS domain) must have a unique IP within the cluster.

If the global services have actual VIPs, you can use those, but otherwise we suggest using IPs from the class E addresses range 240.0.0.0/4. Application traffic for these IPs will be captured by the sidecar and routed to the appropriate remote service.

    Multicast addresses (224.0.0.0 ~ 239.255.255.255) should not be used because there is no route to them by default.Loopback addresses (127.0.0.0/8) should also not be used because traffic sent to them may be redirected to the sidecar inbound listener.

$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-bar
spec:
  hosts:
  # must be of form name.namespace.global
  - httpbin.bar.global
  # Treat remote cluster services as part of the service mesh
  # as all clusters in the service mesh share the same root of trust.
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 8000
    protocol: http
  resolution: DNS
  addresses:
  # the IP address to which httpbin.bar.global will resolve to
  # must be unique for each remote service, within a given cluster.
  # This address need not be routable. Traffic for this IP will be captured
  # by the sidecar and routed appropriately.
  - 240.0.0.2
  endpoints:
  # This is the routable address of the ingress gateway in cluster2 that
  # sits in front of the httpbin.bar service. Traffic from the sidecar will be
  # routed to this address.
  - address: ${CLUSTER2_GW_ADDR}
    ports:
      http1: 15443 # Do not change this port value
EOF

The configurations above will result in all traffic in cluster1 for httpbin.bar.global on any port to be routed to the endpoint <IPofCluster2IngressGateway>:15443 over a mutual TLS connection.

The gateway for port 15443 is a special SNI-aware Envoy preconfigured and installed when you deployed the Istio control plane in the cluster. Traffic entering port 15443 will be load balanced among pods of the appropriate internal service of the target cluster (in this case, httpbin.bar in cluster2).

    Do not create a Gateway configuration for port 15443.
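If you are curious what serves port 15443, you can list the gateways that the multicluster installation created in cluster2; one of them is the SNI passthrough gateway bound to port 15443 (its exact name depends on the Istio release, so this is only an inspection aid):

$ kubectl get gateways.networking.istio.io -n istio-system --context=$CTX_CLUSTER2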

    • Verify that httpbin is accessible from the sleep service.
$ kubectl exec --context=$CTX_CLUSTER1 $SLEEP_POD -n foo -c sleep -- curl -I httpbin.bar.global:8000/headers

    Send remote traffic via an egress gateway

If you want to route traffic from cluster1 via a dedicated egress gateway, instead of directly from the sidecars, use the following service entry for httpbin.bar instead of the one in the previous section.

The egress gateway used in this configuration cannot also be used for other, non-inter-cluster, egress traffic.

If $CLUSTER2_GW_ADDR is an IP address, use the first service entry below (with resolution: STATIC). If $CLUSTER2_GW_ADDR is a hostname, use the second service entry (with resolution: DNS).

    • Export the cluster1 egress gateway address:
$ export CLUSTER1_EGW_ADDR=$(kubectl get --context=$CTX_CLUSTER1 svc --selector=app=istio-egressgateway \
    -n istio-system -o jsonpath='{.items[0].spec.clusterIP}')
    • Apply the httpbin-bar service entry:
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-bar
spec:
  hosts:
  # must be of form name.namespace.global
  - httpbin.bar.global
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 8000
    protocol: http
  resolution: STATIC
  addresses:
  - 240.0.0.2
  endpoints:
  - address: ${CLUSTER2_GW_ADDR}
    network: external
    ports:
      http1: 15443 # Do not change this port value
  - address: ${CLUSTER1_EGW_ADDR}
    ports:
      http1: 15443
EOF

If ${CLUSTER2_GW_ADDR} is a hostname, you can instead use resolution: DNS for the endpoint resolution:

$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-bar
spec:
  hosts:
  # must be of form name.namespace.global
  - httpbin.bar.global
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 8000
    protocol: http
  resolution: DNS
  addresses:
  - 240.0.0.2
  endpoints:
  - address: ${CLUSTER2_GW_ADDR}
    network: external
    ports:
      http1: 15443 # Do not change this port value
  - address: istio-egressgateway.istio-system.svc.cluster.local
    ports:
      http1: 15443
EOF
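After applying either of these service entries, you can repeat the check from the previous section to confirm that httpbin in cluster2 is still reachable from sleep, now with the request leaving cluster1 through the egress gateway:

$ kubectl exec --context=$CTX_CLUSTER1 $SLEEP_POD -n foo -c sleep -- curl -I httpbin.bar.global:8000/headers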

    Cleanup the example

    Execute the following commands to clean up the example services.

    • Cleanup cluster1:


$ kubectl delete --context=$CTX_CLUSTER1 -n foo -f samples/sleep/sleep.yaml
$ kubectl delete --context=$CTX_CLUSTER1 -n foo serviceentry httpbin-bar
$ kubectl delete --context=$CTX_CLUSTER1 ns foo
    • Cleanup cluster2:


$ kubectl delete --context=$CTX_CLUSTER2 -n bar -f samples/httpbin/httpbin.yaml
$ kubectl delete --context=$CTX_CLUSTER2 ns bar
    • Cleanup environment variables:
$ unset SLEEP_POD CLUSTER2_GW_ADDR CLUSTER1_EGW_ADDR CTX_CLUSTER1 CTX_CLUSTER2

    Version-aware routing to remote services

If the remote service has multiple versions, you can add labels to the service entry endpoints. For example:

$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-bar
spec:
  hosts:
  # must be of form name.namespace.global
  - httpbin.bar.global
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 8000
    protocol: http
  resolution: DNS
  addresses:
  # the IP address to which httpbin.bar.global will resolve to
  # must be unique for each service.
  - 240.0.0.2
  endpoints:
  - address: ${CLUSTER2_GW_ADDR}
    labels:
      cluster: cluster2
    ports:
      http1: 15443 # Do not change this port value
EOF

You can then create virtual services and destination rules to define subsets of the httpbin.bar.global service using the appropriate gateway label selectors. The instructions are the same as those used for routing to a local service. See multicluster version routing for a complete example.
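As a minimal sketch of that idea (the subset name and routing rule here are illustrative, not part of the original example), a destination rule can define a subset keyed on the cluster: cluster2 endpoint label, and a virtual service can then route httpbin.bar.global traffic to that subset:

$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin-bar
spec:
  host: httpbin.bar.global
  trafficPolicy:
    tls:
      # Keep Istio mutual TLS so traffic is accepted by the remote cluster's gateway.
      mode: ISTIO_MUTUAL
  subsets:
  # Subset selecting the endpoint labeled cluster: cluster2 above.
  - name: cluster2
    labels:
      cluster: cluster2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin-bar
spec:
  hosts:
  - httpbin.bar.global
  http:
  - route:
    - destination:
        host: httpbin.bar.global
        subset: cluster2
EOF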

    Uninstalling

    Uninstall Istio by running the following commands on every cluster:

$ istioctl manifest generate \
    -f install/kubernetes/operator/examples/multicluster/values-istio-multicluster-gateways.yaml \
    | kubectl delete -f -

    Summary

Using Istio gateways, a common root CA, and service entries, you can configure a single Istio service mesh across multiple Kubernetes clusters. Once configured this way, traffic can be transparently routed to remote clusters without any application involvement. Although this approach requires a certain amount of manual configuration for remote service access, the service entry creation process could be automated.

    See also

    Multi-Mesh Deployments for Isolation and Boundary Protection

    Deploy environments that require isolation into separate meshes and enable inter-mesh communication by mesh federation.

    Google Kubernetes Engine

    Set up a multicluster mesh over two GKE clusters.

    IBM Cloud Private

    Example multicluster mesh over two IBM Cloud Private clusters.

    Shared control plane (multi-network)

    Install an Istio mesh across multiple Kubernetes clusters using a shared control plane for disconnected cluster networks.

    Shared control plane (single-network)

    Install an Istio mesh across multiple Kubernetes clusters with a shared control plane and VPN connectivity between clusters.

    Simplified Multicluster Install [Experimental]

    Configure an Istio mesh spanning multiple Kubernetes clusters.