• Circuit Breaking
    • Before you begin
    • Configuring the circuit breaker
    • Adding a client
    • Tripping the circuit breaker
    • Cleaning up
    • See also

    Circuit Breaking

    This task shows you how to configure circuit breaking for connections, requests, and outlier detection.

    Circuit breaking is an important pattern for creating resilient microservice applications. Circuit breaking allows you to write applications that limit the impact of failures, latency spikes, and other undesirable effects of network peculiarities.
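    Istio implements this pattern for you at the proxy level, but the underlying idea is simple enough to sketch in a few lines. The following minimal Python sketch (the `CircuitBreaker` class, its thresholds, and its method names are illustrative, not part of Istio or Envoy) opens the circuit after a run of consecutive failures and rejects calls until a recovery timeout elapses:

```python
import time

class CircuitBreaker:
    """Illustrative circuit breaker: closed until too many consecutive
    failures, then open (rejecting calls) until a recovery timeout."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures    # consecutive errors before opening
        self.reset_timeout = reset_timeout  # seconds to keep the circuit open
        self.failures = 0
        self.opened_at = None               # None means the circuit is closed

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Fail fast instead of loading an already-unhealthy backend.
                raise RuntimeError("circuit open: request rejected")
            # Recovery timeout elapsed: close the circuit and try again.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0                   # success resets the failure count
        return result
```

    This corresponds loosely to the outlier detection settings you configure below: consecutiveErrors plays the role of max_failures, and baseEjectionTime plays the role of the reset timeout.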

    In this task, you will configure circuit breaking rules and then test the configuration by intentionally “tripping” the circuit breaker.

    Before you begin

    • Set up Istio by following the instructions in the Installation guide.
    • Start the httpbin sample.

    If you have enabled automatic sidecar injection, deploy the httpbin service:


    $ kubectl apply -f @samples/httpbin/httpbin.yaml@

    Otherwise, you have to manually inject the sidecar before deploying the httpbin application:


    $ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/httpbin.yaml@)

    The httpbin application serves as the backend service for this task.

    Configuring the circuit breaker

    • Create a destination rule to apply circuit breaking settings when calling the httpbin service:

    If you installed/configured Istio with mutual TLS authentication enabled, you must add a TLS traffic policy mode: ISTIO_MUTUAL to the DestinationRule before applying it. Otherwise, requests will generate 503 errors as described here.
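    For a mesh with mutual TLS enabled, the rule below would start like this (a sketch: only the tls block is added, the rest of the rule is unchanged):

```yaml
spec:
  host: httpbin
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # required when the mesh uses mutual TLS
    connectionPool:
      tcp:
        maxConnections: 1
```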

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: httpbin
    spec:
      host: httpbin
      trafficPolicy:
        connectionPool:
          tcp:
            maxConnections: 1
          http:
            http1MaxPendingRequests: 1
            maxRequestsPerConnection: 1
        outlierDetection:
          consecutiveErrors: 1
          interval: 1s
          baseEjectionTime: 3m
          maxEjectionPercent: 100
    EOF
    • Verify the destination rule was created correctly:
    $ kubectl get destinationrule httpbin -o yaml
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: httpbin
      ...
    spec:
      host: httpbin
      trafficPolicy:
        connectionPool:
          http:
            http1MaxPendingRequests: 1
            maxRequestsPerConnection: 1
          tcp:
            maxConnections: 1
        outlierDetection:
          baseEjectionTime: 180.000s
          consecutiveErrors: 1
          interval: 1.000s
          maxEjectionPercent: 100

    Adding a client

    Create a client to send traffic to the httpbin service. The client is a simple load-testing client called fortio. Fortio lets you control the number of connections, concurrency, and delays for outgoing HTTP calls. You will use this client to “trip” the circuit breaker policies you set in the DestinationRule.

    • Inject the client with the Istio sidecar proxy so network interactions are governed by Istio:


    $ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/sample-client/fortio-deploy.yaml@)
    • Log in to the client pod and use the fortio tool to call httpbin. Pass in -curl to indicate that you just want to make one call:
    $ FORTIO_POD=$(kubectl get pod | grep fortio | awk '{ print $1 }')
    $ kubectl exec -it $FORTIO_POD -c fortio /usr/bin/fortio -- load -curl http://httpbin:8000/get
    HTTP/1.1 200 OK
    server: envoy
    date: Tue, 16 Jan 2018 23:47:00 GMT
    content-type: application/json
    access-control-allow-origin: *
    access-control-allow-credentials: true
    content-length: 445
    x-envoy-upstream-service-time: 36
    {
      "args": {},
      "headers": {
        "Content-Length": "0",
        "Host": "httpbin:8000",
        "User-Agent": "istio/fortio-0.6.2",
        "X-B3-Sampled": "1",
        "X-B3-Spanid": "824fbd828d809bf4",
        "X-B3-Traceid": "824fbd828d809bf4",
        "X-Ot-Span-Context": "824fbd828d809bf4;824fbd828d809bf4;0000000000000000",
        "X-Request-Id": "1ad2de20-806e-9622-949a-bd1d9735a3f4"
      },
      "origin": "127.0.0.1",
      "url": "http://httpbin:8000/get"
    }

    You can see the request succeeded! Now, it’s time to break something.

    Tripping the circuit breaker

    In the DestinationRule settings, you specified maxConnections: 1 and http1MaxPendingRequests: 1. These rules indicate that if you exceed more than one connection and request concurrently, you should see some failures when the istio-proxy opens the circuit for further requests and connections.
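    As a rough model (illustrative only; Envoy's actual admission control is per-request and timing-dependent), the proxy serves up to maxConnections requests while queueing up to http1MaxPendingRequests more, so any instantaneous concurrency beyond that combined capacity overflows and is answered with a 503:

```python
def overflow(concurrent, max_connections=1, max_pending=1):
    """Requests beyond the instantaneous capacity (active + queued)
    are rejected by the proxy with a 503."""
    capacity = max_connections + max_pending
    return max(0, concurrent - capacity)

# With 2 concurrent callers nothing exceeds capacity at a steady state,
# so only timing races produce the occasional 503; with 3 callers one
# request is always over capacity and 503s become frequent.
print(overflow(2))  # 0
print(overflow(3))  # 1
```

    This is why the two experiments below behave so differently: at -c 2 only a handful of requests fail, while at -c 3 roughly a third of them do.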

    • Call the service with two concurrent connections (-c 2) and send 20 requests (-n 20):
    $ kubectl exec -it $FORTIO_POD -c fortio /usr/bin/fortio -- load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
    Fortio 0.6.2 running at 0 queries per second, 2->2 procs, for 5s: http://httpbin:8000/get
    Starting at max qps with 2 thread(s) [gomax 2] for exactly 20 calls (10 per thread + 0)
    23:51:10 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
    Ended after 106.474079ms : 20 calls. qps=187.84
    Aggregated Function Time : count 20 avg 0.010215375 +/- 0.003604 min 0.005172024 max 0.019434859 sum 0.204307492
    # range, mid point, percentile, count
    >= 0.00517202 <= 0.006 , 0.00558601 , 5.00, 1
    > 0.006 <= 0.007 , 0.0065 , 20.00, 3
    > 0.007 <= 0.008 , 0.0075 , 30.00, 2
    > 0.008 <= 0.009 , 0.0085 , 40.00, 2
    > 0.009 <= 0.01 , 0.0095 , 60.00, 4
    > 0.01 <= 0.011 , 0.0105 , 70.00, 2
    > 0.011 <= 0.012 , 0.0115 , 75.00, 1
    > 0.012 <= 0.014 , 0.013 , 90.00, 3
    > 0.016 <= 0.018 , 0.017 , 95.00, 1
    > 0.018 <= 0.0194349 , 0.0187174 , 100.00, 1
    # target 50% 0.0095
    # target 75% 0.012
    # target 99% 0.0191479
    # target 99.9% 0.0194062
    Code 200 : 19 (95.0 %)
    Code 503 : 1 (5.0 %)
    Response Header Sizes : count 20 avg 218.85 +/- 50.21 min 0 max 231 sum 4377
    Response Body/Total Sizes : count 20 avg 652.45 +/- 99.9 min 217 max 676 sum 13049
    All done 20 calls (plus 0 warmup) 10.215 ms avg, 187.8 qps

    It’s interesting to see that almost all requests made it through! The istio-proxy does allow for some leeway.

    Code 200 : 19 (95.0 %)
    Code 503 : 1 (5.0 %)
    • Bring the number of concurrent connections up to 3:
    $ kubectl exec -it $FORTIO_POD -c fortio /usr/bin/fortio -- load -c 3 -qps 0 -n 30 -loglevel Warning http://httpbin:8000/get
    Fortio 0.6.2 running at 0 queries per second, 2->2 procs, for 5s: http://httpbin:8000/get
    Starting at max qps with 3 thread(s) [gomax 2] for exactly 30 calls (10 per thread + 0)
    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
    23:51:51 W http.go:617> Parsed non ok code 503 (HTTP/1.1 503)
    Ended after 71.05365ms : 30 calls. qps=422.22
    Aggregated Function Time : count 30 avg 0.0053360199 +/- 0.004219 min 0.000487853 max 0.018906468 sum 0.160080597
    # range, mid point, percentile, count
    >= 0.000487853 <= 0.001 , 0.000743926 , 10.00, 3
    > 0.001 <= 0.002 , 0.0015 , 30.00, 6
    > 0.002 <= 0.003 , 0.0025 , 33.33, 1
    > 0.003 <= 0.004 , 0.0035 , 40.00, 2
    > 0.004 <= 0.005 , 0.0045 , 46.67, 2
    > 0.005 <= 0.006 , 0.0055 , 60.00, 4
    > 0.006 <= 0.007 , 0.0065 , 73.33, 4
    > 0.007 <= 0.008 , 0.0075 , 80.00, 2
    > 0.008 <= 0.009 , 0.0085 , 86.67, 2
    > 0.009 <= 0.01 , 0.0095 , 93.33, 2
    > 0.014 <= 0.016 , 0.015 , 96.67, 1
    > 0.018 <= 0.0189065 , 0.0184532 , 100.00, 1
    # target 50% 0.00525
    # target 75% 0.00725
    # target 99% 0.0186345
    # target 99.9% 0.0188793
    Code 200 : 19 (63.3 %)
    Code 503 : 11 (36.7 %)
    Response Header Sizes : count 30 avg 145.73333 +/- 110.9 min 0 max 231 sum 4372
    Response Body/Total Sizes : count 30 avg 507.13333 +/- 220.8 min 217 max 676 sum 15214
    All done 30 calls (plus 0 warmup) 5.336 ms avg, 422.2 qps

    Now you start to see the expected circuit breaking behavior. Only 63.3% of the requests succeeded and the rest were trapped by circuit breaking:

    Code 200 : 19 (63.3 %)
    Code 503 : 11 (36.7 %)
    • Query the istio-proxy stats to see more:
    $ kubectl exec $FORTIO_POD -c istio-proxy -- pilot-agent request GET stats | grep httpbin | grep pending
    cluster.outbound|80||httpbin.springistio.svc.cluster.local.upstream_rq_pending_active: 0
    cluster.outbound|80||httpbin.springistio.svc.cluster.local.upstream_rq_pending_failure_eject: 0
    cluster.outbound|80||httpbin.springistio.svc.cluster.local.upstream_rq_pending_overflow: 12
    cluster.outbound|80||httpbin.springistio.svc.cluster.local.upstream_rq_pending_total: 39

    You can see 12 for the upstream_rq_pending_overflow value, which means 12 calls so far have been flagged for circuit breaking.

    Cleaning up

    • Remove the rules:
    $ kubectl delete destinationrule httpbin
    • Shut down the httpbin service and client:
    $ kubectl delete deploy httpbin fortio-deploy
    $ kubectl delete svc httpbin

    See also

    Istio as a Proxy for External Services

    Configure Istio ingress gateway to act as a proxy for external services.

    Multi-Mesh Deployments for Isolation and Boundary Protection

    Deploy environments that require isolation into separate meshes and enable inter-mesh communication by mesh federation.

    Secure Control of Egress Traffic in Istio, part 3

    Comparison of alternative solutions to control egress traffic including performance considerations.

    Secure Control of Egress Traffic in Istio, part 2

    Use Istio Egress Traffic Control to prevent attacks involving egress traffic.

    Secure Control of Egress Traffic in Istio, part 1

    Attacks involving egress traffic and requirements for egress traffic control.

    Version Routing in a Multicluster Service Mesh

    Configuring Istio route rules in a multicluster service mesh.