• Deployments
    • Use Case
    • Creating a Deployment
      • Pod-template-hash label
    • Updating a Deployment
      • Rollover (aka multiple updates in-flight)
      • Label selector updates
    • Rolling Back a Deployment
      • Checking Rollout History of a Deployment
      • Rolling Back to a Previous Revision
    • Scaling a Deployment
      • Proportional scaling
    • Pausing and Resuming a Deployment
    • Deployment status
      • Progressing Deployment
      • Complete Deployment
      • Failed Deployment
      • Operating on a failed deployment
    • Clean up Policy
    • Use Cases
      • Canary Deployment
    • Writing a Deployment Spec
      • Pod Template
      • Replicas
      • Selector
      • Strategy
        • Recreate Deployment
        • Rolling Update Deployment
          • Max Unavailable
          • Max Surge
      • Progress Deadline Seconds
      • Min Ready Seconds
      • Rollback To
        • Revision
      • Revision History Limit
      • Paused
    • Alternative to Deployments
      • kubectl rolling update

    Deployments

A Deployment controller provides declarative updates for Pods and ReplicaSets.

    You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.

Note: You should not manage ReplicaSets owned by a Deployment. All the use cases should be covered by manipulating the Deployment object. Consider opening an issue in the main Kubernetes repository if your use case is not covered below.

    Use Case

    The following are typical use cases for Deployments:

    • Create a Deployment to rollout a ReplicaSet. The ReplicaSet creates Pods in the background. Check the status of the rollout to see if it succeeds or not.
    • Declare the new state of the Pods by updating the PodTemplateSpec of the Deployment. A new ReplicaSet is created and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment.
    • Rollback to an earlier Deployment revision if the current state of the Deployment is not stable. Each rollback updates the revision of the Deployment.
    • Scale up the Deployment to facilitate more load.
    • Pause the Deployment to apply multiple fixes to its PodTemplateSpec and then resume it to start a new rollout.
• Use the status of the Deployment as an indicator that a rollout has stuck.
• Clean up older ReplicaSets that you don’t need anymore.

    Creating a Deployment

    Here is an example Deployment. It creates a ReplicaSet to bring up three nginx Pods.

controllers/nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

    Run the example by downloading the example file and then running this command:

kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml

Setting the kubectl flag --record to true allows you to record the current command in the annotations of the resources being created or updated. It is useful for future introspection: for example, to see the commands executed in each Deployment revision.
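
For example, assuming you want the change cause recorded for the Deployment created above, you could append the flag to the same apply command (a sketch; output omitted):

$ kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml --record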

    Then running get immediately will give:

$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         0         0            0           1s

This indicates that the Deployment’s number of desired replicas is 3 (according to the Deployment’s .spec.replicas), the number of current replicas (.status.replicas) is 0, the number of up-to-date replicas (.status.updatedReplicas) is 0, and the number of available replicas (.status.availableReplicas) is also 0.

    To see the Deployment rollout status, run:

$ kubectl rollout status deployment/nginx-deployment
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
deployment "nginx-deployment" successfully rolled out

Running get again a few seconds later should give:

$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            3           18s

This indicates that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest pod template) and available (the pod status is Ready for at least the Deployment’s .spec.minReadySeconds). Running kubectl get rs and kubectl get pods will show the ReplicaSet (RS) and Pods created.

$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-2035384211   3         3         3       18s

You may notice that the name of the ReplicaSet is always <the name of the Deployment>-<hash value of the pod template>.

$ kubectl get pods --show-labels
NAME                                READY     STATUS    RESTARTS   AGE       LABELS
nginx-deployment-2035384211-7ci7o   1/1       Running   0          18s       app=nginx,pod-template-hash=2035384211
nginx-deployment-2035384211-kzszj   1/1       Running   0          18s       app=nginx,pod-template-hash=2035384211
nginx-deployment-2035384211-qqcnn   1/1       Running   0          18s       app=nginx,pod-template-hash=2035384211

    The created ReplicaSet ensures that there are three nginx Pods at all times.

Note: You must specify an appropriate selector and pod template labels in a Deployment (in this case, app: nginx). That is, don’t overlap with other controllers (including other Deployments, ReplicaSets, StatefulSets, etc.). Kubernetes doesn’t stop you from overlapping, and if multiple controllers have overlapping selectors, those controllers may fight with each other and won’t behave correctly.

    Pod-template-hash label

Note: Do not change this label.

Note the pod-template-hash label in the pod labels in the example output above. This label is added by the Deployment controller to every ReplicaSet that a Deployment creates or adopts. Its purpose is to make sure that child ReplicaSets of a Deployment do not overlap. It is computed by hashing the PodTemplate of the ReplicaSet and using the resulting hash as the label value, which is added to the ReplicaSet selector, the pod template labels, and any existing Pods that the ReplicaSet may have.
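
For example, you can see this label on the ReplicaSet itself; a sketch of what the output might look like, reusing the hash value from the example above:

$ kubectl get rs --show-labels
NAME                          DESIRED   CURRENT   READY   AGE   LABELS
nginx-deployment-2035384211   3         3         3       18s   app=nginx,pod-template-hash=2035384211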

    Updating a Deployment

Note: A Deployment’s rollout is triggered if and only if the Deployment’s pod template (that is, .spec.template) is changed, for example if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.

Suppose that we now want to update the nginx Pods to use the nginx:1.9.1 image instead of the nginx:1.7.9 image.

$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
deployment "nginx-deployment" image updated

Alternatively, we can edit the Deployment and change .spec.template.spec.containers[0].image from nginx:1.7.9 to nginx:1.9.1:

$ kubectl edit deployment/nginx-deployment
deployment "nginx-deployment" edited

To see the rollout status, run:

$ kubectl rollout status deployment/nginx-deployment
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
deployment "nginx-deployment" successfully rolled out

    After the rollout succeeds, you may want to get the Deployment:

$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            3           36s

The number of up-to-date replicas indicates that the Deployment has updated the replicas to the latest configuration. The current replicas indicates the total replicas this Deployment manages, and the available replicas indicates the number of current replicas that are available.

We can run kubectl get rs to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.

$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-1564180365   3         3         3       6s
nginx-deployment-2035384211   0         0         0       36s

Running get pods should now show only the new Pods:

$ kubectl get pods
NAME                                READY     STATUS    RESTARTS   AGE
nginx-deployment-1564180365-khku8   1/1       Running   0          14s
nginx-deployment-1564180365-nacti   1/1       Running   0          14s
nginx-deployment-1564180365-z9gth   1/1       Running   0          14s

    Next time we want to update these Pods, we only need to update the Deployment’s pod template again.

A Deployment can ensure that only a certain number of Pods may be down while they are being updated. By default, it ensures that at least 1 less than the desired number of Pods are up (1 max unavailable).

A Deployment can also ensure that only a certain number of Pods may be created above the desired number of Pods. By default, it ensures that at most 1 more than the desired number of Pods are up (1 max surge).

    In a future version of Kubernetes, the defaults will change from 1-1 to 25%-25%.
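
If the defaults don’t suit your workload, both values can be set explicitly under the Deployment’s update strategy; a minimal sketch (the values shown are illustrative):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 Pod below the desired count during the update
      maxSurge: 1         # at most 1 Pod above the desired count during the update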

For example, if you look at the above Deployment closely, you will see that it first created a new Pod, then deleted some old Pods and created new ones. It does not kill old Pods until a sufficient number of new Pods have come up, and does not create new Pods until a sufficient number of old Pods have been killed. It makes sure that the number of available Pods is at least 2 and the number of total Pods is at most 4.

$ kubectl describe deployments
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Tue, 15 Mar 2016 12:01:06 -0700
Labels:                 app=nginx
Selector:               app=nginx
Replicas:               3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          nginx-deployment-1564180365 (3/3 replicas created)
Events:
  FirstSeen  LastSeen  Count  From                      SubobjectPath  Type    Reason             Message
  ---------  --------  -----  ----                      -------------  ----    ------             -------
  36s        36s       1      {deployment-controller }                 Normal  ScalingReplicaSet  Scaled up replica set nginx-deployment-2035384211 to 3
  23s        23s       1      {deployment-controller }                 Normal  ScalingReplicaSet  Scaled up replica set nginx-deployment-1564180365 to 1
  23s        23s       1      {deployment-controller }                 Normal  ScalingReplicaSet  Scaled down replica set nginx-deployment-2035384211 to 2
  23s        23s       1      {deployment-controller }                 Normal  ScalingReplicaSet  Scaled up replica set nginx-deployment-1564180365 to 2
  21s        21s       1      {deployment-controller }                 Normal  ScalingReplicaSet  Scaled down replica set nginx-deployment-2035384211 to 0
  21s        21s       1      {deployment-controller }                 Normal  ScalingReplicaSet  Scaled up replica set nginx-deployment-1564180365 to 3

Here we see that when we first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When we updated the Deployment, it created a new ReplicaSet (nginx-deployment-1564180365) and scaled it up to 1 and then scaled down the old ReplicaSet to 2, so that at least 2 Pods were available and at most 4 Pods were created at all times. It then continued scaling up and down the new and the old ReplicaSet, with the same rolling update strategy. Finally, we’ll have 3 available replicas in the new ReplicaSet, and the old ReplicaSet is scaled down to 0.

    Rollover (aka multiple updates in-flight)

Each time a new Deployment object is observed by the Deployment controller, a ReplicaSet is created to bring up the desired Pods if there is no existing ReplicaSet doing so. Existing ReplicaSets controlling Pods whose labels match .spec.selector but whose template does not match .spec.template are scaled down. Eventually, the new ReplicaSet will be scaled to .spec.replicas and all old ReplicaSets will be scaled to 0.

If you update a Deployment while an existing rollout is in progress, the Deployment will create a new ReplicaSet as per the update and start scaling that up, and will roll over the ReplicaSet that it was scaling up previously – it will add it to its list of old ReplicaSets and will start scaling it down.

For example, suppose you create a Deployment to create 5 replicas of nginx:1.7.9, but then update the Deployment to create 5 replicas of nginx:1.9.1 when only 3 replicas of nginx:1.7.9 have been created. In that case, the Deployment immediately starts killing the 3 nginx:1.7.9 Pods that it had created, and starts creating nginx:1.9.1 Pods. It does not wait for 5 replicas of nginx:1.7.9 to be created before changing course.

    Label selector updates

It is generally discouraged to make label selector updates and it is suggested to plan your selectors up front. In any case, if you need to perform a label selector update, exercise great caution and make sure you have grasped all of the implications.

• Selector additions require the pod template labels in the Deployment spec to be updated with the new label too, otherwise a validation error is returned. This change is a non-overlapping one, meaning that the new selector does not select ReplicaSets and Pods created with the old selector, resulting in orphaning all old ReplicaSets and creating a new ReplicaSet (see the sketch after this list).
• Selector updates – that is, changing the existing value in a selector key – result in the same behavior as additions.
• Selector removals – that is, removing an existing key from the Deployment selector – do not require any changes in the pod template labels. No existing ReplicaSet is orphaned, and a new ReplicaSet is not created, but note that the removed label still exists in any existing Pods and ReplicaSets.
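
A minimal sketch of a selector addition, assuming a hypothetical tier: frontend label is being added; the new key must appear in both the selector and the pod template labels:

spec:
  selector:
    matchLabels:
      app: nginx
      tier: frontend    # newly added key
  template:
    metadata:
      labels:
        app: nginx
        tier: frontend  # must carry the same new label as the selector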

    Rolling Back a Deployment

Sometimes you may want to roll back a Deployment; for example, when the Deployment is not stable, such as crash looping. By default, all of the Deployment’s rollout history is kept in the system so that you can roll back anytime you want (you can change that by modifying the revision history limit).

Note: A Deployment’s revision is created when a Deployment’s rollout is triggered. This means that a new revision is created if and only if the Deployment’s pod template (.spec.template) is changed, for example if you update the labels or container images of the template. Other updates, such as scaling the Deployment, do not create a Deployment revision, so that we can facilitate simultaneous manual or auto-scaling. This means that when you roll back to an earlier revision, only the Deployment’s pod template part is rolled back.

    Suppose that we made a typo while updating the Deployment, by putting the image name as nginx:1.91 instead of nginx:1.9.1:

$ kubectl set image deployment/nginx-deployment nginx=nginx:1.91
deployment "nginx-deployment" image updated

The rollout will be stuck.

$ kubectl rollout status deployments nginx-deployment
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...

Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts, see the Deployment status section below.

You will also see that both the number of old replicas (nginx-deployment-1564180365 and nginx-deployment-2035384211) and new replicas (nginx-deployment-3066724191) are 2.

$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-1564180365   2         2         2       25s
nginx-deployment-2035384211   0         0         0       36s
nginx-deployment-3066724191   2         2         0       6s

Looking at the Pods created, you will see that the 2 Pods created by the new ReplicaSet are stuck in an image pull loop.

$ kubectl get pods
NAME                                READY     STATUS             RESTARTS   AGE
nginx-deployment-1564180365-70iae   1/1       Running            0          25s
nginx-deployment-1564180365-jbqqo   1/1       Running            0          25s
nginx-deployment-3066724191-08mng   0/1       ImagePullBackOff   0          6s
nginx-deployment-3066724191-eocby   0/1       ImagePullBackOff   0          6s

Note: The Deployment controller will stop the bad rollout automatically, and will stop scaling up the new ReplicaSet. This depends on the rollingUpdate parameters (maxUnavailable specifically) that you have specified. Kubernetes by default sets the value to 1 and .spec.replicas to 1, so if you haven’t cared about setting those parameters, your Deployment can have 100% unavailability by default! This will be fixed in Kubernetes in a future version.
$ kubectl describe deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Tue, 15 Mar 2016 14:48:04 -0700
Labels:                 app=nginx
Selector:               app=nginx
Replicas:               2 updated | 3 total | 2 available | 2 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         nginx-deployment-1564180365 (2/2 replicas created)
NewReplicaSet:          nginx-deployment-3066724191 (2/2 replicas created)
Events:
  FirstSeen  LastSeen  Count  From                      SubobjectPath  Type    Reason             Message
  ---------  --------  -----  ----                      -------------  ----    ------             -------
  1m         1m        1      {deployment-controller }                 Normal  ScalingReplicaSet  Scaled up replica set nginx-deployment-2035384211 to 3
  22s        22s       1      {deployment-controller }                 Normal  ScalingReplicaSet  Scaled up replica set nginx-deployment-1564180365 to 1
  22s        22s       1      {deployment-controller }                 Normal  ScalingReplicaSet  Scaled down replica set nginx-deployment-2035384211 to 2
  22s        22s       1      {deployment-controller }                 Normal  ScalingReplicaSet  Scaled up replica set nginx-deployment-1564180365 to 2
  21s        21s       1      {deployment-controller }                 Normal  ScalingReplicaSet  Scaled down replica set nginx-deployment-2035384211 to 0
  21s        21s       1      {deployment-controller }                 Normal  ScalingReplicaSet  Scaled up replica set nginx-deployment-1564180365 to 3
  13s        13s       1      {deployment-controller }                 Normal  ScalingReplicaSet  Scaled up replica set nginx-deployment-3066724191 to 1
  13s        13s       1      {deployment-controller }                 Normal  ScalingReplicaSet  Scaled down replica set nginx-deployment-1564180365 to 2
  13s        13s       1      {deployment-controller }                 Normal  ScalingReplicaSet  Scaled up replica set nginx-deployment-3066724191 to 2

To fix this, we need to roll back to a previous revision of the Deployment that is stable.

    Checking Rollout History of a Deployment

    First, check the revisions of this deployment:

$ kubectl rollout history deployment/nginx-deployment
deployments "nginx-deployment"
REVISION    CHANGE-CAUSE
1           kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
2           kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
3           kubectl set image deployment/nginx-deployment nginx=nginx:1.91

Because we recorded the command while creating this Deployment using --record, we can easily see the changes we made in each revision.

    To further see the details of each revision, run:

$ kubectl rollout history deployment/nginx-deployment --revision=2
deployments "nginx-deployment" revision 2
  Labels:       app=nginx
                pod-template-hash=1159050644
  Annotations:  kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
  Containers:
   nginx:
    Image:      nginx:1.9.1
    Port:       80/TCP
    QoS Tier:
      cpu:      BestEffort
      memory:   BestEffort
    Environment Variables:      <none>
  No volumes.

    Rolling Back to a Previous Revision

Now we’ve decided to undo the current rollout and roll back to the previous revision:

$ kubectl rollout undo deployment/nginx-deployment
deployment "nginx-deployment" rolled back

Alternatively, you can roll back to a specific revision by specifying it with --to-revision:

$ kubectl rollout undo deployment/nginx-deployment --to-revision=2
deployment "nginx-deployment" rolled back

    For more details about rollout related commands, read kubectl rollout.

The Deployment is now rolled back to a previous stable revision. As you can see, a DeploymentRollback event for rolling back to revision 2 is generated by the Deployment controller.

$ kubectl get deployment
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            3           30m
$ kubectl describe deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Tue, 15 Mar 2016 14:48:04 -0700
Labels:                 app=nginx
Selector:               app=nginx
Replicas:               3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          nginx-deployment-1564180365 (3/3 replicas created)
Events:
  FirstSeen  LastSeen  Count  From                      SubobjectPath  Type    Reason              Message
  ---------  --------  -----  ----                      -------------  ----    ------              -------
  30m        30m       1      {deployment-controller }                 Normal  ScalingReplicaSet   Scaled up replica set nginx-deployment-2035384211 to 3
  29m        29m       1      {deployment-controller }                 Normal  ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 1
  29m        29m       1      {deployment-controller }                 Normal  ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 2
  29m        29m       1      {deployment-controller }                 Normal  ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 2
  29m        29m       1      {deployment-controller }                 Normal  ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 0
  29m        29m       1      {deployment-controller }                 Normal  ScalingReplicaSet   Scaled up replica set nginx-deployment-3066724191 to 2
  29m        29m       1      {deployment-controller }                 Normal  ScalingReplicaSet   Scaled up replica set nginx-deployment-3066724191 to 1
  29m        29m       1      {deployment-controller }                 Normal  ScalingReplicaSet   Scaled down replica set nginx-deployment-1564180365 to 2
  2m         2m        1      {deployment-controller }                 Normal  ScalingReplicaSet   Scaled down replica set nginx-deployment-3066724191 to 0
  2m         2m        1      {deployment-controller }                 Normal  DeploymentRollback  Rolled back deployment "nginx-deployment" to revision 2
  29m        2m        2      {deployment-controller }                 Normal  ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 3

    Scaling a Deployment

    You can scale a Deployment by using the following command:

$ kubectl scale deployment nginx-deployment --replicas=10
deployment "nginx-deployment" scaled

Assuming horizontal pod autoscaling is enabled in your cluster, you can set up an autoscaler for your Deployment and choose the minimum and maximum number of Pods you want to run based on the CPU utilization of your existing Pods.

$ kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80
deployment "nginx-deployment" autoscaled

    Proportional scaling

RollingUpdate Deployments support running multiple versions of an application at the same time. When you or an autoscaler scales a RollingUpdate Deployment that is in the middle of a rollout (either in progress or paused), the Deployment controller balances the additional replicas across the existing active ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called proportional scaling.

    For example, you are running a Deployment with 10 replicas, maxSurge=3, and maxUnavailable=2.

$ kubectl get deploy
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   10        10        10           10          50s

You update to a new image which happens to be unresolvable from inside the cluster.

$ kubectl set image deploy/nginx-deployment nginx=nginx:sometag
deployment "nginx-deployment" image updated

The image update starts a new rollout with ReplicaSet nginx-deployment-1989198191, but it’s blocked due to the maxUnavailable requirement that we mentioned above.

$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-1989198191   5         5         0       9s
nginx-deployment-618515232    8         8         8       1m

Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas to 15. The Deployment controller needs to decide where to add these new 5 replicas. If we weren’t using proportional scaling, all 5 of them would be added in the new ReplicaSet. With proportional scaling, we spread the additional replicas across all ReplicaSets. Bigger proportions go to the ReplicaSets with the most replicas and lower proportions go to ReplicaSets with fewer replicas. Any leftovers are added to the ReplicaSet with the most replicas. ReplicaSets with zero replicas are not scaled up.

In our example above, 3 replicas will be added to the old ReplicaSet and 2 replicas will be added to the new ReplicaSet (the old ReplicaSet holds 8 of the 13 existing replicas, so it receives the larger share of the 5 additional replicas). The rollout process should eventually move all replicas to the new ReplicaSet, assuming the new replicas become healthy.

$ kubectl get deploy
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   15        18        7            8           7m
$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-1989198191   7         7         0       7m
nginx-deployment-618515232    11        11        11      7m

    Pausing and Resuming a Deployment

You can pause a Deployment before triggering one or more updates and then resume it. This allows you to apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts.

    For example, with a Deployment that was just created:

$ kubectl get deploy
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx     3         3         3            3           1m
$ kubectl get rs
NAME               DESIRED   CURRENT   READY   AGE
nginx-2142116321   3         3         3       1m

Pause by running the following command:

$ kubectl rollout pause deployment/nginx-deployment
deployment "nginx-deployment" paused

Then update the image of the Deployment:

$ kubectl set image deploy/nginx-deployment nginx=nginx:1.9.1
deployment "nginx-deployment" image updated

Notice that no new rollout started:

$ kubectl rollout history deploy/nginx-deployment
deployments "nginx"
REVISION  CHANGE-CAUSE
1         <none>
$ kubectl get rs
NAME               DESIRED   CURRENT   READY   AGE
nginx-2142116321   3         3         3       2m

You can make as many updates as you wish, for example, update the resources that will be used:

$ kubectl set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi
deployment "nginx" resource requirements updated

The initial state of the Deployment prior to pausing it will continue its function, but new updates to the Deployment will not have any effect as long as the Deployment is paused.

    Eventually, resume the Deployment and observe a new ReplicaSet coming up with all the new updates:

$ kubectl rollout resume deploy/nginx-deployment
deployment "nginx" resumed
$ kubectl get rs -w
NAME               DESIRED   CURRENT   READY   AGE
nginx-2142116321   2         2         2       2m
nginx-3926361531   2         2         0       6s
nginx-3926361531   2         2         1       18s
nginx-2142116321   1         2         2       2m
nginx-2142116321   1         2         2       2m
nginx-3926361531   3         2         1       18s
nginx-3926361531   3         2         1       18s
nginx-2142116321   1         1         1       2m
nginx-3926361531   3         3         1       18s
nginx-3926361531   3         3         2       19s
nginx-2142116321   0         1         1       2m
nginx-2142116321   0         1         1       2m
nginx-2142116321   0         0         0       2m
nginx-3926361531   3         3         3       20s
^C
$ kubectl get rs
NAME               DESIRED   CURRENT   READY   AGE
nginx-2142116321   0         0         0       2m
nginx-3926361531   3         3         3       28s

Note: You cannot roll back a paused Deployment until you resume it.

    Deployment status

A Deployment enters various states during its lifecycle. It can be progressing while rolling out a new ReplicaSet, it can be complete, or it can fail to progress.

    Progressing Deployment

    Kubernetes marks a Deployment as progressing when one of the following tasks is performed:

    • The Deployment creates a new ReplicaSet.
    • The Deployment is scaling up its newest ReplicaSet.
    • The Deployment is scaling down its older ReplicaSet(s).
    • New Pods become ready or available (ready for at least MinReadySeconds).

    You can monitor the progress for a Deployment by using kubectl rollout status.

    Complete Deployment

    Kubernetes marks a Deployment as complete when it has the following characteristics:

• All of the replicas associated with the Deployment have been updated to the latest version you’ve specified, meaning any updates you’ve requested have been completed.
    • All of the replicas associated with the Deployment are available.
    • No old replicas for the Deployment are running.

You can check if a Deployment has completed by using kubectl rollout status. If the rollout completed successfully, kubectl rollout status returns a zero exit code.

$ kubectl rollout status deploy/nginx-deployment
Waiting for rollout to finish: 2 of 3 updated replicas are available...
deployment "nginx" successfully rolled out
$ echo $?
0

    Failed Deployment

Your Deployment may get stuck trying to deploy its newest ReplicaSet without ever completing. This can occur due to some of the following factors:

    • Insufficient quota
    • Readiness probe failures
    • Image pull errors
    • Insufficient permissions
    • Limit ranges
    • Application runtime misconfiguration

One way you can detect this condition is to specify a deadline parameter in your Deployment spec: .spec.progressDeadlineSeconds. .spec.progressDeadlineSeconds denotes the number of seconds the Deployment controller waits before indicating (in the Deployment status) that the Deployment progress has stalled.

The following kubectl command sets the spec with progressDeadlineSeconds to make the controller report lack of progress for a Deployment after 10 minutes:

$ kubectl patch deployment/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'
"nginx-deployment" patched

Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following attributes to the Deployment’s status.conditions:

    • Type=Progressing
    • Status=False
    • Reason=ProgressDeadlineExceeded

    See the Kubernetes API conventions for more information on status conditions.

Note: Kubernetes will take no action on a stalled Deployment other than to report a status condition with Reason=ProgressDeadlineExceeded. Higher level orchestrators can take advantage of it and act accordingly, for example, roll back the Deployment to its previous version.

Note: If you pause a Deployment, Kubernetes does not check progress against your specified deadline. You can safely pause a Deployment in the middle of a rollout and resume without triggering the condition for exceeding the deadline.

You may experience transient errors with your Deployments, either due to a low timeout that you have set or due to any other kind of error that can be treated as transient. For example, let’s suppose you have insufficient quota. If you describe the Deployment, you will notice the following section:

$ kubectl describe deployment nginx-deployment
<...>
Conditions:
  Type            Status  Reason
  ----            ------  ------
  Available       True    MinimumReplicasAvailable
  Progressing     True    ReplicaSetUpdated
  ReplicaFailure  True    FailedCreate
<...>

    If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status might look like this:

status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: 2016-10-04T12:25:39Z
    lastUpdateTime: 2016-10-04T12:25:39Z
    message: Replica set "nginx-deployment-4262182780" is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  - lastTransitionTime: 2016-10-04T12:25:42Z
    lastUpdateTime: 2016-10-04T12:25:42Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2016-10-04T12:25:39Z
    lastUpdateTime: 2016-10-04T12:25:39Z
    message: 'Error creating: pods "nginx-deployment-4262182780-" is forbidden: exceeded quota:
      object-counts, requested: pods=1, used: pods=3, limited: pods=2'
    reason: FailedCreate
    status: "True"
    type: ReplicaFailure
  observedGeneration: 3
  replicas: 2
  unavailableReplicas: 2

Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:

Conditions:
  Type            Status  Reason
  ----            ------  ------
  Available       True    MinimumReplicasAvailable
  Progressing     False   ProgressDeadlineExceeded
  ReplicaFailure  True    FailedCreate

You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota conditions and the Deployment controller then completes the Deployment rollout, you’ll see the Deployment’s status update with a successful condition (Status=True and Reason=NewReplicaSetAvailable).

Conditions:
  Type          Status  Reason
  ----          ------  ------
  Available     True    MinimumReplicasAvailable
  Progressing   True    NewReplicaSetAvailable

Type=Available with Status=True means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. Type=Progressing with Status=True means that your Deployment is either in the middle of a rollout and it is progressing, or that it has successfully completed its progress and the minimum required new replicas are available (see the Reason of the condition for the particulars – in our case, Reason=NewReplicaSetAvailable means that the Deployment is complete).

You can check if a Deployment has failed to progress by using kubectl rollout status. kubectl rollout status returns a non-zero exit code if the Deployment has exceeded the progression deadline.

$ kubectl rollout status deploy/nginx-deployment
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
error: deployment "nginx" exceeded its progress deadline
$ echo $?
1

    Operating on a failed deployment

All actions that apply to a complete Deployment also apply to a failed Deployment. You can scale it up/down, roll back to a previous revision, or even pause it if you need to apply multiple tweaks in the Deployment pod template.

    Clean up Policy

You can set the .spec.revisionHistoryLimit field in a Deployment to specify how many old ReplicaSets for this Deployment you want to retain. The rest will be garbage-collected in the background. By default, all revision history will be kept. In a future version, the default will switch to 2.

Note: Explicitly setting this field to 0 will result in cleaning up all the history of your Deployment, so that Deployment will not be able to roll back.
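
For example, to retain only the ten most recent old ReplicaSets, you could set the field explicitly (a minimal sketch; the value is illustrative):

spec:
  revisionHistoryLimit: 10   # keep the 10 most recent old ReplicaSets for rollback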

    Use Cases

    Canary Deployment

If you want to roll out releases to a subset of users or servers using the Deployment, you can create multiple Deployments, one for each release, following the canary pattern described in managing resources.
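
A minimal sketch of that pattern, assuming a hypothetical track label to distinguish the two releases (names and images are illustrative; a Service selecting only app: guestbook would route traffic to both):

# stable release, carrying most of the traffic
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-stable
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      track: stable
  template:
    metadata:
      labels:
        app: guestbook
        track: stable
    spec:
      containers:
      - name: frontend
        image: gb-frontend:v3
---
# canary release, running the new image on a single replica
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: guestbook
      track: canary
  template:
    metadata:
      labels:
        app: guestbook
        track: canary
    spec:
      containers:
      - name: frontend
        image: gb-frontend:v4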

    Writing a Deployment Spec

As with all other Kubernetes configs, a Deployment needs apiVersion, kind, and metadata fields. For general information about working with config files, see the deploying applications, configuring containers, and using kubectl to manage resources documents.

    A Deployment also needs a .spec section.

    Pod Template

    The .spec.template is the only required field of the .spec.

The .spec.template is a pod template. It has exactly the same schema as a Pod, except it is nested and does not have an apiVersion or kind.

In addition to the required fields for a Pod, a pod template in a Deployment must specify appropriate labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See selector.

Only a .spec.template.spec.restartPolicy equal to Always is allowed, which is the default if not specified.

    Replicas

    .spec.replicas is an optional field that specifies the number of desired Pods. It defaults to 1.

    Selector

.spec.selector is an optional field that specifies a label selector for the Pods targeted by this Deployment.

If specified, .spec.selector must match .spec.template.metadata.labels, or it will be rejected by the API. If .spec.selector is unspecified, .spec.selector.matchLabels defaults to .spec.template.metadata.labels.

A Deployment may terminate Pods whose labels match the selector if their template is different from .spec.template or if the total number of such Pods exceeds .spec.replicas. It brings up new Pods with .spec.template if the number of Pods is less than the desired number.

Note: You should not create other pods whose labels match this selector, either directly, by creating another Deployment, or by creating another controller such as a ReplicaSet or a ReplicationController. If you do so, the first Deployment thinks that it created these other pods. Kubernetes does not stop you from doing this.

If you have multiple controllers that have overlapping selectors, the controllers will fight with each other and won’t behave correctly.
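
For example, in the Deployment at the top of this page the selector matches the pod template labels exactly, as required:

spec:
  selector:
    matchLabels:
      app: nginx       # must match the template labels below
  template:
    metadata:
      labels:
        app: nginx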

    Strategy

.spec.strategy specifies the strategy used to replace old Pods with new ones. .spec.strategy.type can be "Recreate" or "RollingUpdate". "RollingUpdate" is the default value.

    Recreate Deployment

    All existing Pods are killed before new ones are created when .spec.strategy.type==Recreate.

    Rolling Update Deployment

The Deployment updates Pods in a rolling update fashion when .spec.strategy.type==RollingUpdate. You can specify maxUnavailable and maxSurge to control the rolling update process.

Max Unavailable

.spec.strategy.rollingUpdate.maxUnavailable is an optional field that specifies the maximum number of Pods that can be unavailable during the update process. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%). The absolute number is calculated from the percentage by rounding down. The value cannot be 0 if .spec.strategy.rollingUpdate.maxSurge is 0. The default value is 25%.

For example, when this value is set to 30%, the old ReplicaSet can be scaled down to 70% of desired Pods immediately when the rolling update starts. Once new Pods are ready, the old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of Pods available at all times during the update is at least 70% of the desired Pods.

Max Surge

.spec.strategy.rollingUpdate.maxSurge is an optional field that specifies the maximum number of Pods that can be created over the desired number of Pods. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%). The value cannot be 0 if MaxUnavailable is 0. The absolute number is calculated from the percentage by rounding up. The default value is 25%.

For example, when this value is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such that the total number of old and new Pods does not exceed 130% of desired Pods. Once old Pods have been killed, the new ReplicaSet can be scaled up further, ensuring that the total number of Pods running at any time during the update is at most 130% of desired Pods.
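
Putting the two fields together, a strategy corresponding to the 30% examples above could be written as follows (a sketch; the percentages are illustrative):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 30%   # at least 70% of desired Pods stay available
      maxSurge: 30%         # at most 130% of desired Pods exist during the update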

    Progress Deadline Seconds

.spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed progressing – surfaced as a condition with Type=Progressing, Status=False, and Reason=ProgressDeadlineExceeded in the status of the resource. The Deployment controller will keep retrying the Deployment. In the future, once automatic rollback is implemented, the Deployment controller will roll back a Deployment as soon as it observes such a condition.

    If specified, this field needs to be greater than .spec.minReadySeconds.

    Min Ready Seconds

.spec.minReadySeconds is an optional field that specifies the minimum number of seconds for which a newly created Pod should be ready, without any of its containers crashing, for it to be considered available. This defaults to 0 (the Pod will be considered available as soon as it is ready). To learn more about when a Pod is considered ready, see Container Probes.
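
As an illustration, the two timing-related fields can be set side by side in the spec; a minimal sketch (values are illustrative, and progressDeadlineSeconds is kept greater than minReadySeconds as required):

spec:
  minReadySeconds: 5            # a new Pod must stay ready for 5 seconds to count as available
  progressDeadlineSeconds: 600  # report lack of progress after 10 minutes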

    Rollback To

.spec.rollbackTo is an optional field with the configuration the Deployment should roll back to. Setting this field triggers a rollback, and the field will be cleared by the server after a rollback is done.

Because this field will be cleared by the server, it should not be used declaratively. For example, you should not perform kubectl apply with a manifest that has the .spec.rollbackTo field set.

Revision

.spec.rollbackTo.revision is an optional field specifying the revision to roll back to. Setting it to 0 means rolling back to the last revision in history; otherwise, it means rolling back to the specified revision. This defaults to 0 when .spec.rollbackTo is set.

    Revision History Limit

    A Deployment’s revision history is stored in the replica sets it controls.

.spec.revisionHistoryLimit is an optional field that specifies the number of old ReplicaSets to retain to allow rollback. Its ideal value depends on the frequency and stability of new Deployments. If this field is not set, all old ReplicaSets will be kept by default, consuming resources in etcd and crowding the output of kubectl get rs. The configuration of each Deployment revision is stored in its ReplicaSets; therefore, once an old ReplicaSet is deleted, you lose the ability to roll back to that revision of the Deployment.

More specifically, setting this field to zero means that all old ReplicaSets with 0 replicas will be cleaned up. In this case, a new Deployment rollout cannot be undone, since its revision history is cleaned up.

    Paused

.spec.paused is an optional boolean field for pausing and resuming a Deployment. The only difference between a paused Deployment and one that is not paused is that any changes to the PodTemplateSpec of the paused Deployment will not trigger new rollouts as long as it is paused. A Deployment is not paused by default when it is created.

    Alternative to Deployments

    kubectl rolling update

kubectl rolling-update updates Pods and ReplicationControllers in a similar fashion. But Deployments are recommended, since they are declarative, server side, and have additional features, such as rolling back to any previous revision even after the rolling update is done.
