• Single-network Mesh Expansion
    • Prerequisites
    • Installation steps
      • Preparing the Kubernetes cluster for expansion
      • Setting up the VM
    • Send requests from VM workloads to Kubernetes services
    • Running services on a mesh expansion machine
    • Cleanup
    • Troubleshooting
    • Related content

    Single-network Mesh Expansion

    This example provides instructions to integrate VMs and bare metal hosts into an Istio mesh deployed on Kubernetes.

    Prerequisites

    • You have already set up Istio on Kubernetes. If you haven’t done so, you can find out how in the Installation guide.

    • Mesh expansion machines must have IP connectivity to the endpoints in the mesh. This typically requires a VPC or a VPN, as well as a container network that provides direct (without NAT or firewall deny) routing to the endpoints. The machine is not required to have access to the cluster IP addresses assigned by Kubernetes.

    • Mesh expansion VMs must have access to a DNS server that resolves names to cluster IP addresses. Options include exposing the Kubernetes DNS server through an internal load balancer, using a Core DNS server, or configuring the IPs in any other DNS server accessible from the VM. A sketch of the load balancer option follows this list.

    • Install the Helm client. Helm is needed to enable mesh expansion.
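
    As one illustration of the DNS prerequisite above, on GKE you could expose kube-dns through an internal load balancer. The following is a minimal sketch, assuming kube-dns runs in the kube-system namespace with the k8s-app: kube-dns label and that your platform honors the cloud.google.com/load-balancer-type annotation; the service name kube-dns-internal is an arbitrary choice, and only UDP is exposed here for simplicity:

    $ kubectl -n kube-system apply -f - <<EOF
    # Internal load balancer in front of kube-dns so that mesh expansion VMs
    # in the same VPC can resolve cluster service names.
    apiVersion: v1
    kind: Service
    metadata:
      name: kube-dns-internal
      annotations:
        cloud.google.com/load-balancer-type: "Internal"
    spec:
      type: LoadBalancer
      selector:
        k8s-app: kube-dns
      ports:
      - name: dns-udp
        port: 53
        protocol: UDP
        targetPort: 53
    EOF

    You would then point the VM's resolver (for example, via /etc/resolv.conf or your DHCP configuration) at the internal load balancer's address.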

    The following instructions:

    • Assume the expansion VM is running on GCE.
    • Use Google platform-specific commands for some steps.

    Installation steps

    Setup consists of preparing the mesh for expansion and installing and configuring each VM.

    Preparing the Kubernetes cluster for expansion

    The first step when adding non-Kubernetes services to an Istio mesh is to configure the Istio installation itself, and generate the configuration files that let mesh expansion VMs connect to the mesh. To prepare the cluster for mesh expansion, run the following commands on a machine with cluster admin privileges:

    • Ensure that mesh expansion is enabled for the cluster. If you didn’t use the --set global.meshExpansion.enabled=true flag when installing Istio, you can use one of the following two options depending on how you originally installed Istio on the cluster:

      • If you installed Istio with Helm and Tiller, run helm upgrade with the new option:
    $ cd install/kubernetes/helm/istio
    $ helm upgrade --set global.meshExpansion.enabled=true istio .
    $ cd -
    • If you installed Istio without Helm and Tiller, use helm template to update your configuration with the option and reapply with kubectl:
    $ kubectl create namespace istio-system
    $ helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
    $ cd install/kubernetes/helm/istio
    $ helm template --set global.meshExpansion.enabled=true --namespace istio-system . > istio.yaml
    $ kubectl apply -f istio.yaml
    $ cd -

    When updating configuration with Helm, you can either set the option on the command line, as in our examples, or add it to a .yaml values file and pass it to the command with --values, which is the recommended approach when managing configurations with multiple options. You can see some sample values files in your Istio installation’s install/kubernetes/helm/istio directory and find out more about customizing Helm charts in the Helm documentation.
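
    For example, a minimal values file equivalent to the --set flag above might look like this (the file name mesh-expansion-values.yaml is just an illustrative choice):

    $ cat > mesh-expansion-values.yaml <<EOF
    # Enable mesh expansion; equivalent to --set global.meshExpansion.enabled=true
    global:
      meshExpansion:
        enabled: true
    EOF
    $ helm template install/kubernetes/helm/istio --name istio --namespace istio-system --values mesh-expansion-values.yaml > istio.yaml
    $ kubectl apply -f istio.yaml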

    • Define the namespace the VM joins. This example uses the SERVICE_NAMESPACE environment variable to store the namespace. The value of this variable must match the namespace you use in the configuration files later on.
    $ export SERVICE_NAMESPACE="default"
    • Determine and store the IP address of the Istio ingress gateway since the mesh expansion machines access Citadel and Pilot through this IP address.
    $ export GWIP=$(kubectl get -n istio-system service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    $ echo $GWIP
    35.232.112.158
    • Generate a cluster.env configuration to deploy in the VMs. This file contains the Kubernetes cluster IP address ranges to intercept and redirect via Envoy. You specify the CIDR range when you install Kubernetes as servicesIpv4Cidr. Replace $MY_ZONE and $MY_PROJECT in the following example commands with the appropriate values to obtain the CIDR after installation:
    $ ISTIO_SERVICE_CIDR=$(gcloud container clusters describe $K8S_CLUSTER --zone $MY_ZONE --project $MY_PROJECT --format "value(servicesIpv4Cidr)")
    $ echo -e "ISTIO_CP_AUTH=MUTUAL_TLS\nISTIO_SERVICE_CIDR=$ISTIO_SERVICE_CIDR\n" > cluster.env
    • Check the contents of the generated cluster.env file. It should be similar to the following example:
    $ cat cluster.env
    ISTIO_CP_AUTH=MUTUAL_TLS
    ISTIO_SERVICE_CIDR=10.55.240.0/20
    • If the VM only calls services in the mesh, you can skip this step. Otherwise, add the ports the VM exposes to the cluster.env file with the following command. You can change the ports later if necessary.
    $ echo "ISTIO_INBOUND_PORTS=3306,8080" >> cluster.env
    • Extract the initial keys the service account needs to use on the VMs.
    $ kubectl -n $SERVICE_NAMESPACE get secret istio.default \
        -o jsonpath='{.data.root-cert\.pem}' | base64 --decode > root-cert.pem
    $ kubectl -n $SERVICE_NAMESPACE get secret istio.default \
        -o jsonpath='{.data.key\.pem}' | base64 --decode > key.pem
    $ kubectl -n $SERVICE_NAMESPACE get secret istio.default \
        -o jsonpath='{.data.cert-chain\.pem}' | base64 --decode > cert-chain.pem

    Setting up the VM

    Next, run the following commands on each machine that you want to add to the mesh:

    • Copy the previously created cluster.env and *.pem files to the VM. For example:
    $ export GCE_NAME="your-gce-instance"
    $ gcloud compute scp --project=${MY_PROJECT} --zone=${MY_ZONE} {key.pem,cert-chain.pem,cluster.env,root-cert.pem} ${GCE_NAME}:~
    • Install the Debian package with the Envoy sidecar.
    $ gcloud compute ssh --project=${MY_PROJECT} --zone=${MY_ZONE} "${GCE_NAME}"
    $ curl -L https://storage.googleapis.com/istio-release/releases/1.4.0/deb/istio-sidecar.deb > istio-sidecar.deb
    $ sudo dpkg -i istio-sidecar.deb
    • Add the IP address of the Istio gateway to /etc/hosts. Revisit the preparing the cluster section to learn how to obtain the IP address. The following example updates the /etc/hosts file with the Istio gateway address:
    $ echo "35.232.112.158 istio-citadel istio-pilot istio-pilot.istio-system" | sudo tee -a /etc/hosts
    • Install root-cert.pem, key.pem and cert-chain.pem under /etc/certs/.
    $ sudo mkdir -p /etc/certs
    $ sudo cp {root-cert.pem,cert-chain.pem,key.pem} /etc/certs
    • Install cluster.env under /var/lib/istio/envoy/.
    $ sudo cp cluster.env /var/lib/istio/envoy
    • Transfer ownership of the files in /etc/certs/ and /var/lib/istio/envoy/ to the Istio proxy.
    $ sudo chown -R istio-proxy /etc/certs /var/lib/istio/envoy
    • Verify the node agent works:
    $ sudo node_agent
    ....
    CSR is approved successfully. Will renew cert in 1079h59m59.84568493s
    • Start Istio using systemctl.
    $ sudo systemctl start istio-auth-node-agent
    $ sudo systemctl start istio

    Send requests from VM workloads to Kubernetes services

    After setup, the machine can access services running in the Kubernetes cluster or on other mesh expansion machines.

    The following example shows accessing a service running in the Kubernetes cluster from a mesh expansion VM using /etc/hosts, in this case using a service from the Bookinfo example.

    • First, on the cluster admin machine get the virtual IP address (clusterIP) for the service:
    $ kubectl get svc productpage -o jsonpath='{.spec.clusterIP}'
    10.55.246.247
    • Then on the mesh expansion machine, add the service name and address to its /etc/hosts file. You can then connect to the cluster service from the VM, as in the example below:
    $ echo "10.55.246.247 productpage.default.svc.cluster.local" | sudo tee -a /etc/hosts
    $ curl -v productpage.default.svc.cluster.local:9080
    < HTTP/1.1 200 OK
    < content-type: text/html; charset=utf-8
    < content-length: 1836
    < server: envoy
    ... html content ...

    The server: envoy header indicates that the sidecar intercepted the traffic.

    Running services on a mesh expansion machine

    • Set up an HTTP server on the VM instance to serve HTTP traffic on port 8080:
    $ gcloud compute ssh ${GCE_NAME}
    $ python -m SimpleHTTPServer 8080
    • Determine the VM instance’s IP address. For example, find the IP address of the GCE instance with the following commands:
    $ export GCE_IP=$(gcloud --format="value(networkInterfaces[0].networkIP)" compute instances describe ${GCE_NAME})
    $ echo ${GCE_IP}
    • Configure a service entry to enable service discovery for the VM. You can add VM services to the mesh using a service entry. Service entries let you manually add additional services to Pilot’s abstract model of the mesh. Once VM services are part of the mesh’s abstract model, other services can find and direct traffic to them. Each service entry configuration contains the IP addresses, ports, and appropriate labels of all VMs exposing a particular service, for example:
    $ kubectl -n ${SERVICE_NAMESPACE} apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: vmhttp
    spec:
      hosts:
      - vmhttp.${SERVICE_NAMESPACE}.svc.cluster.local
      ports:
      - number: 8080
        name: http
        protocol: HTTP
      resolution: STATIC
      endpoints:
      - address: ${GCE_IP}
        ports:
          http: 8080
        labels:
          app: vmhttp
          version: "v1"
    EOF
    • The workloads in a Kubernetes cluster need a DNS mapping to resolve the domain names of VM services. To integrate the mapping with your own DNS system, use istioctl register, which creates a Kubernetes selector-less service, for example:
    $ istioctl register -n ${SERVICE_NAMESPACE} vmhttp ${GCE_IP} 8080

    Make sure you have already added the istioctl client to your path, as described in the download page.
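
    For reference, istioctl register works by creating a selector-less Kubernetes Service together with an Endpoints object that points at the VM. The effect is roughly comparable to applying something like the following (a sketch for illustration, not the exact objects istioctl produces):

    $ kubectl -n ${SERVICE_NAMESPACE} apply -f - <<EOF
    # Selector-less service: no pod selector, endpoints are managed explicitly.
    apiVersion: v1
    kind: Service
    metadata:
      name: vmhttp
    spec:
      ports:
      - name: http
        port: 8080
        protocol: TCP
    ---
    # Endpoints object pointing at the VM's IP address.
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: vmhttp
    subsets:
    - addresses:
      - ip: ${GCE_IP}
      ports:
      - name: http
        port: 8080
        protocol: TCP
    EOF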

    • Deploy a pod running the sleep service in the Kubernetes cluster, and wait until it is ready:

    $ kubectl apply -f samples/sleep/sleep.yaml
    $ kubectl get pod
    NAME                             READY   STATUS    RESTARTS   AGE
    productpage-v1-8fcdcb496-xgkwg   2/2     Running   0          1d
    sleep-88ddbcfdd-rm42k            2/2     Running   0          1s
    ...
    • Send a request from the sleep service on the pod to the VM’s HTTP service:
    $ kubectl exec -it sleep-88ddbcfdd-rm42k -c sleep -- curl vmhttp.${SERVICE_NAMESPACE}.svc.cluster.local:8080

    You should see something similar to the output below.

    <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
    <title>Directory listing for /</title>
    <body>
    <h2>Directory listing for /</h2>
    <hr>
    <ul>
    <li><a href=".bashrc">.bashrc</a></li>
    <li><a href=".ssh/">.ssh/</a></li>
    ...
    </body>

    Congratulations! You successfully configured a service running in a pod within the cluster tosend traffic to a service running on a VM outside of the cluster and tested thatthe configuration worked.

    Cleanup

    Run the following commands to remove the expansion VM from the mesh’s abstract model.

    $ istioctl deregister -n ${SERVICE_NAMESPACE} vmhttp ${GCE_IP}
    2019-02-21T22:12:22.023775Z info Deregistered service successfull
    $ kubectl delete ServiceEntry vmhttp -n ${SERVICE_NAMESPACE}
    serviceentry.networking.istio.io "vmhttp" deleted

    Troubleshooting

    The following are some basic troubleshooting steps for common mesh expansion issues.

    • When making requests from a VM to the cluster, ensure you don’t run the requests as the root or istio-proxy user. By default, Istio excludes both users from interception. See the quick check at the end of this section.

    • Verify the machine can reach the IP addresses of all workloads running in the cluster. For example:

    $ kubectl get endpoints productpage -o jsonpath='{.subsets[0].addresses[0].ip}'
    10.52.39.13
    $ curl 10.52.39.13:9080
    html output
    • Check the status of the node agent and sidecar:
    $ sudo systemctl status istio-auth-node-agent
    $ sudo systemctl status istio
    • Check that the processes are running. The following is an example of the processes you should see on the VM if you run ps, filtered for istio:
    $ ps aux | grep istio
    root 6941 0.0 0.2 75392 16820 ? Ssl 21:32 0:00 /usr/local/istio/bin/node_agent --logtostderr
    root 6955 0.0 0.0 49344 3048 ? Ss 21:32 0:00 su -s /bin/bash -c INSTANCE_IP=10.150.0.5 POD_NAME=demo-vm-1 POD_NAMESPACE=default exec /usr/local/bin/pilot-agent proxy > /var/log/istio/istio.log istio-proxy
    istio-p+ 7016 0.0 0.1 215172 12096 ? Ssl 21:32 0:00 /usr/local/bin/pilot-agent proxy
    istio-p+ 7094 4.0 0.3 69540 24800 ? Sl 21:32 0:37 /usr/local/bin/envoy -c /etc/istio/proxy/envoy-rev1.json --restart-epoch 1 --drain-time-s 2 --parent-shutdown-time-s 3 --service-cluster istio-proxy --service-node sidecar~10.150.0.5~demo-vm-1.default~default.svc.cluster.local
    • Check the Envoy access and error logs:
    $ tail /var/log/istio/istio.log
    $ tail /var/log/istio/istio.err.log
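
    For the first troubleshooting point above, a quick way to check which user is issuing the test request (if it prints root or istio-proxy, the request bypasses the sidecar) and to retry it as an ordinary user is shown below; replace your-regular-user with an existing non-root account on the VM:

    $ id -un
    root
    $ sudo -u your-regular-user curl -v productpage.default.svc.cluster.local:9080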

    Related content

    Multi-network Mesh Expansion

    Integrate VMs and bare metal hosts into an Istio mesh deployed on Kubernetes with gateways.

    Demystifying Istio's Sidecar Injection Model

    Demystify how Istio manages to plug its data plane components into an existing deployment.

    Bookinfo with Mesh Expansion

    Illustrates how to expand the Bookinfo application's mesh with a raw VM service.

    Diagnose your Configuration with Istioctl Analyze

    Shows you how to use istioctl analyze to identify potential issues with your configuration.

    Docker Desktop

    Setup instructions to run Istio in Docker Desktop.

    Getting Started

    Download, install, and try out Istio.