• Virtual Machines in Multi-Network Meshes
    • Prerequisites
    • Installation steps
      • Customized installation of Istio on the cluster
      • Setup DNS
      • Setting up the VM
    • Added Istio resources
    • Expose service running on cluster to VMs
    • Send requests from VM to Kubernetes services
    • Running services on the added VM
    • Cleanup
    • See also

    Virtual Machines in Multi-Network Meshes

    This example provides instructions to integrate a VM or a bare metal host into a multi-network Istio mesh deployed on Kubernetes using gateways. This approach doesn’t require VPN connectivity or direct network access between the VM, the bare metal host, and the clusters.

    Prerequisites

    • One or more Kubernetes clusters with versions: 1.13, 1.14, 1.15.

    • Virtual machines (VMs) must have IP connectivity to the Ingress gateways in the mesh.

    • Install the Helm client. Helm is needed to add VMs to your mesh.

    Installation steps

    Setup consists of preparing the mesh for expansion and installing and configuring each VM.

    Customized installation of Istio on the cluster

    The first step when adding non-Kubernetes services to an Istio mesh is to configure the Istio installation itself and generate the configuration files that let VMs connect to the mesh. Prepare the cluster for the VM with the following commands on a machine with cluster admin privileges:

    • Generate a meshexpansion-gateways Istio configuration file using helm:
    $ helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
      -f https://raw.githubusercontent.com/irisdingbj/meshExpansion/master/values-istio-meshexpansion-gateways.yaml \
      > $HOME/istio-mesh-expansion-gateways.yaml

    For further details and customization options, refer to the Installation with Helm instructions.

    • Deploy Istio control plane into the cluster
    $ kubectl create namespace istio-system
    $ helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
    $ kubectl apply -f $HOME/istio-mesh-expansion-gateways.yaml
    • Verify Istio is installed successfully:
    $ istioctl verify-install -f $HOME/istio-mesh-expansion-gateways.yaml
    • Create a vm namespace for the VM services.
    $ kubectl create ns vm
    • Define the namespace the VM joins. This example uses the SERVICE_NAMESPACE environment variable to store the namespace. The value of this variable must match the namespace you use in the configuration files later on.
    $ export SERVICE_NAMESPACE="vm"
    • Extract the initial keys the service account needs to use on the VMs.
    $ kubectl -n $SERVICE_NAMESPACE get secret istio.default \
        -o jsonpath='{.data.root-cert\.pem}' | base64 --decode > root-cert.pem
    $ kubectl -n $SERVICE_NAMESPACE get secret istio.default \
        -o jsonpath='{.data.key\.pem}' | base64 --decode > key.pem
    $ kubectl -n $SERVICE_NAMESPACE get secret istio.default \
        -o jsonpath='{.data.cert-chain\.pem}' | base64 --decode > cert-chain.pem
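    Before copying these credentials to the VM, you can optionally sanity-check them with openssl (a quick verification on the admin machine, not part of the official flow):
    $ openssl x509 -in root-cert.pem -noout -subject -dates
    $ openssl x509 -in cert-chain.pem -noout -subject -issuer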
    • Determine and store the IP address of the Istio ingress gateway, since the VMs access Citadel, Pilot, and workloads on the cluster through this IP address.
    $ export GWIP=$(kubectl get -n istio-system service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    $ echo $GWIP
    35.232.112.158
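    The next step reads the ISTIO_SERVICE_CIDR environment variable, which must contain the cluster’s service IP range. How you obtain this range is environment specific; as a sketch, on GKE you can read it with gcloud (the cluster, zone, and project values are placeholders you must supply):
    $ ISTIO_SERVICE_CIDR=$(gcloud container clusters describe $K8S_CLUSTER --zone $MY_ZONE --project $MY_PROJECT --format "value(servicesIpv4Cidr)")
    $ echo $ISTIO_SERVICE_CIDR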
    • Generate a cluster.env configuration to deploy to the VMs. This file contains the Kubernetes cluster IP address ranges to intercept and redirect via Envoy.
    $ echo -e "ISTIO_CP_AUTH=MUTUAL_TLS\nISTIO_SERVICE_CIDR=$ISTIO_SERVICE_CIDR\n" > cluster.env
    • Check the contents of the generated cluster.env file. It should be similar to the following example:
    $ cat cluster.env
    ISTIO_CP_AUTH=MUTUAL_TLS
    ISTIO_SERVICE_CIDR=172.21.0.0/16

    Setup DNS

    Provide DNS resolution so that services running on the VM can access the services running in the cluster. Istio itself does not use DNS for routing requests between services. Services local to a cluster share a common DNS suffix (e.g., svc.cluster.local), and Kubernetes DNS provides DNS resolution for these services.

    To provide a similar setup for services accessed from VMs, you name services in the clusters in the format <name>.<namespace>.global. Istio also ships with a CoreDNS server that provides DNS resolution for these services. In order to utilize this DNS, Kubernetes’ DNS must be configured to stub a domain for .global.

    Some cloud providers have different specific DNS domain stub capabilities and procedures for their Kubernetes services. Refer to the cloud provider’s documentation to determine how to stub DNS domains for each unique environment. The objective of this step is to stub a domain for .global on port 53 to reference or proxy the istiocoredns service in Istio’s service namespace.

    Create one of the following ConfigMaps, or update an existing one, in each cluster that will be calling services in remote clusters (every cluster in the general case):

    For clusters that use kube-dns:

    $ kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kube-dns
      namespace: kube-system
    data:
      stubDomains: |
        {"global": ["$(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})"]}
    EOF

    For clusters that use CoreDNS:

    $ kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
    data:
      Corefile: |
        .:53 {
            errors
            health
            kubernetes cluster.local in-addr.arpa ip6.arpa {
               pods insecure
               upstream
               fallthrough in-addr.arpa ip6.arpa
            }
            prometheus :9153
            proxy . /etc/resolv.conf
            cache 30
            loop
            reload
            loadbalance
        }
        global:53 {
            errors
            cache 30
            proxy . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})
        }
    EOF
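    Whichever DNS server your cluster runs, you can confirm that the istiocoredns service the stub domain references actually has a cluster IP (a quick sanity check):
    $ kubectl get svc istiocoredns -n istio-system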

    Setting up the VM

    Next, run the following commands on each machine that you want to add to the mesh:

    • Copy the previously created cluster.env and *.pem files to the VM.

    • Install the Debian package with the Envoy sidecar.

    $ curl -L https://storage.googleapis.com/istio-release/releases/1.4.2/deb/istio-sidecar.deb > istio-sidecar.deb
    $ sudo dpkg -i istio-sidecar.deb
    • Add the IP address of the Istio gateway to /etc/hosts. Revisit the Customized installation of Istio on the cluster section to learn how to obtain the IP address. The following example updates the /etc/hosts file with the Istio gateway address:
    $ echo "35.232.112.158 istio-citadel istio-pilot istio-pilot.istio-system" | sudo tee -a /etc/hosts
    • Install root-cert.pem, key.pem and cert-chain.pem under /etc/certs/.
    $ sudo mkdir -p /etc/certs
    $ sudo cp {root-cert.pem,cert-chain.pem,key.pem} /etc/certs
    • Install cluster.env under /var/lib/istio/envoy/.
    $ sudo cp cluster.env /var/lib/istio/envoy
    • Transfer ownership of the files in /etc/certs/ and /var/lib/istio/envoy/ to the Istio proxy.
    $ sudo chown -R istio-proxy /etc/certs /var/lib/istio/envoy
    • Verify the node agent works:
    $ sudo node_agent
    ....
    CSR is approved successfully. Will renew cert in 1079h59m59.84568493s
    • Start Istio using systemctl.
    $ sudo systemctl start istio-auth-node-agent
    $ sudo systemctl start istio
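    After starting both services, you can confirm they are running. The Debian sidecar package writes its logs under /var/log/istio in the release this guide targets (the path may differ in other versions):
    $ sudo systemctl status istio-auth-node-agent istio
    $ tail /var/log/istio/istio.log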

    Added Istio resources

    The Istio resources below are added to support adding VMs to the mesh with gateways. These resources remove the flat network requirement between the VM and the cluster.

    Resource Kind                        | Resource Name                      | Function
    -------------------------------------|------------------------------------|----------------------------------------------
    configmap                            | coredns                            | Send .global requests to the istiocoredns service
    service                              | istiocoredns                       | Resolve .global to the Istio ingress gateway
    gateway.networking.istio.io          | meshexpansion-gateway              | Open ports for Pilot, Citadel and Mixer
    gateway.networking.istio.io          | istio-multicluster-egressgateway   | Open port 15443 for outbound .global traffic
    gateway.networking.istio.io          | istio-multicluster-ingressgateway  | Open port 15443 for inbound .global traffic
    envoyfilter.networking.istio.io      | istio-multicluster-ingressgateway  | Transform .global to .svc.cluster.local
    destinationrule.networking.istio.io  | istio-multicluster-destinationrule | Set traffic policy for port 15443 traffic
    destinationrule.networking.istio.io  | meshexpansion-dr-pilot             | Set traffic policy for istio-pilot
    destinationrule.networking.istio.io  | istio-policy                       | Set traffic policy for istio-policy
    destinationrule.networking.istio.io  | istio-telemetry                    | Set traffic policy for istio-telemetry
    virtualservice.networking.istio.io   | meshexpansion-vs-pilot             | Set route info for istio-pilot
    virtualservice.networking.istio.io   | meshexpansion-vs-citadel           | Set route info for istio-citadel

    Expose service running on cluster to VMs

    Every service in the cluster that needs to be accessed from the VM requires a service entry configuration in the cluster. The host used in the service entry should be of the form <name>.<namespace>.global where name and namespace correspond to the service’s name and namespace respectively.

    To demonstrate access from the VM to cluster services, configure the httpbin service in the cluster.

    • Deploy the httpbin service in the cluster

    $ kubectl create namespace bar
    $ kubectl label namespace bar istio-injection=enabled
    $ kubectl apply -n bar -f samples/httpbin/httpbin.yaml
    • Create a service entry for the httpbin service in the cluster.

    To allow services in the VM to access httpbin in the cluster, we need to create a service entry for it. The host name of the service entry should be of the form <name>.<namespace>.global, where name and namespace correspond to the remote service’s name and namespace respectively.

    For DNS resolution of services under the *.global domain, you need to assign these services an IP address.

    Each service (in the .global DNS domain) must have a unique IP within the cluster.

    If the global services have actual VIPs, you can use those, but otherwise we suggest using IPs from the loopback range 127.0.0.0/8 that are not already allocated. These IPs are non-routable outside of a pod. In this example we’ll use IPs in 127.255.0.0/16, which avoids conflicting with well-known IPs such as 127.0.0.1 (localhost). Application traffic for these IPs will be captured by the sidecar and routed to the appropriate remote service.
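    The service entry below references ${CLUSTER_GW_ADDR}, the routable address of the cluster’s ingress gateway. This is the same address stored in GWIP earlier; as a sketch, you can capture it the same way:
    $ export CLUSTER_GW_ADDR=$(kubectl get -n istio-system service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')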

    $ kubectl apply -n bar -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: httpbin.bar.forvms
    spec:
      hosts:
      # must be of form name.namespace.global
      - httpbin.bar.global
      location: MESH_INTERNAL
      ports:
      - name: http1
        number: 8000
        protocol: http
      resolution: DNS
      addresses:
      # the IP address to which httpbin.bar.global will resolve;
      # must be unique for each service, within a given cluster.
      # This address need not be routable. Traffic for this IP will be captured
      # by the sidecar and routed appropriately.
      # This address will also be added to the VM's /etc/hosts
      - 127.255.0.3
      endpoints:
      # This is the routable address of the ingress gateway in the cluster.
      # Traffic from the VMs will be routed to this address.
      - address: ${CLUSTER_GW_ADDR}
        ports:
          http1: 15443 # Do not change this port value
    EOF

    The configuration above will result in all traffic from VMs for httpbin.bar.global, on any port, being routed to the endpoint <IPofClusterIngressGateway>:15443 over a mutual TLS connection.

    The gateway for port 15443 is a special SNI-aware Envoy, preconfigured and installed as part of the mesh expansion with gateways installation step in the Customized installation of Istio on the cluster section. Traffic entering port 15443 will be load balanced among pods of the appropriate internal service of the target cluster (in this case, httpbin.bar in the cluster).

    Do not create a Gateway configuration for port 15443.

    Send requests from VM to Kubernetes services

    After setup, the machine can access services running in the Kubernetes cluster.

    The following example shows how to access a service running in the Kubernetes cluster from a VM using an /etc/hosts entry, in this case a service from the httpbin service.

    • On the added VM, add the service name and address to its /etc/hosts file. You can then connect to the cluster service from the VM, as in the example below:
    $ echo "127.255.0.3 httpbin.bar.global" | sudo tee -a /etc/hosts
    $ curl -v httpbin.bar.global:8000
    < HTTP/1.1 200 OK
    < server: envoy
    < content-type: text/html; charset=utf-8
    < content-length: 9593
    ... html content ...

    The server: envoy header indicates that the sidecar intercepted the traffic.
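    Because httpbin echoes request metadata, requesting its /headers endpoint from the VM is another quick way to observe the mesh at work; the exact output varies by Istio version, but you should see Envoy-injected headers such as x-request-id:
    $ curl httpbin.bar.global:8000/headers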

    Running services on the added VM

    • Set up an HTTP server on the VM instance to serve HTTP traffic on port 8888:
    $ python -m SimpleHTTPServer 8888
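    The SimpleHTTPServer module only exists in Python 2; if the VM has Python 3 instead, the equivalent command is:
    $ python3 -m http.server 8888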
    • Determine the VM instance’s IP address.
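    The following steps reference this address as ${VM_IP}. As a sketch, assuming the first address the VM reports is the one reachable from the cluster, you can capture it on the VM and export the same value in the shell where you run the kubectl commands below:
    $ export VM_IP=$(hostname -I | awk '{print $1}')
    $ echo $VM_IP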

    • Configure a service entry to enable service discovery for the VM. You can add VM services to the mesh using a service entry. Service entries let you manually add additional services to Pilot’s abstract model of the mesh. Once VM services are part of the mesh’s abstract model, other services can find and direct traffic to them. Each service entry configuration contains the IP addresses, ports, and appropriate labels of all VMs exposing a particular service, for example:

    $ kubectl -n ${SERVICE_NAMESPACE} apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: vmhttp
    spec:
      hosts:
      - vmhttp.${SERVICE_NAMESPACE}.svc.cluster.local
      ports:
      - number: 8888
        name: http
        protocol: HTTP
      resolution: STATIC
      endpoints:
      - address: ${VM_IP}
        ports:
          http: 8888
        labels:
          app: vmhttp
          version: "v1"
    EOF
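    You can confirm that the service entry was created before moving on:
    $ kubectl -n ${SERVICE_NAMESPACE} get serviceentry vmhttp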
    • The workloads in a Kubernetes cluster need a DNS mapping to resolve the domain names of VM services. To integrate the mapping with your own DNS system, use istioctl register to create a selector-less Kubernetes service, for example:
    $ istioctl register -n ${SERVICE_NAMESPACE} vmhttp ${VM_IP} 8888

    Ensure you have added the istioctl client to your path, as described in the download page.

    • Deploy a pod running the sleep service in the Kubernetes cluster, and wait until it is ready:

    $ kubectl apply -f samples/sleep/sleep.yaml
    $ kubectl get pod
    NAME                             READY   STATUS    RESTARTS   AGE
    productpage-v1-8fcdcb496-xgkwg   2/2     Running   0          1d
    sleep-88ddbcfdd-rm42k            2/2     Running   0          1s
    ...
    • Send a request from the sleep service on the pod to the VM’s HTTP service:
    $ kubectl exec -it sleep-88ddbcfdd-rm42k -c sleep -- curl vmhttp.${SERVICE_NAMESPACE}.svc.cluster.local:8888

    If configured properly, you will see something similar to the output below.

    <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
    <title>Directory listing for /</title>
    <body>
    <h2>Directory listing for /</h2>
    <hr>
    <ul>
    <li><a href=".bashrc">.bashrc</a></li>
    <li><a href=".ssh/">.ssh/</a></li>
    ...
    </body>

    Congratulations! You successfully configured a service running in a pod within the cluster to send traffic to a service running on a VM outside of the cluster and tested that the configuration worked.

    Cleanup

    Run the following commands to remove the expansion VM from the mesh’s abstract model.

    $ istioctl deregister -n ${SERVICE_NAMESPACE} vmhttp ${VM_IP}
    2019-02-21T22:12:22.023775Z info Deregistered service successfull
    $ kubectl delete ServiceEntry vmhttp -n ${SERVICE_NAMESPACE}
    serviceentry.networking.istio.io "vmhttp" deleted
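    If you also deployed the httpbin sample and its service entry earlier, you can remove them as well (a sketch; skip anything you still need):
    $ kubectl delete serviceentry httpbin.bar.forvms -n bar
    $ kubectl delete namespace bar vm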

    See also

    Bookinfo with a Virtual Machine

    Run the Bookinfo application with a MySQL service running on a virtual machine within your mesh.

    Virtual Machines in Single-Network Meshes

    Learn how to add a service running on a virtual machine to your single network Istio mesh.

    DNS Certificate Management

    Provision and manage DNS certificates in Istio.

    Secure Webhook Management

    A more secure way to manage Istio webhooks.

    Demystifying Istio's Sidecar Injection Model

    De-mystify how Istio manages to plug its data-plane components into an existing deployment.

    Customizable Install with Helm

    Install and configure Istio for in-depth evaluation or production use.