• ReplicaSet
    • How a ReplicaSet works
    • When to use a ReplicaSet
    • Example
    • Non-Template Pod acquisitions
    • Writing a ReplicaSet manifest
      • Pod Template
      • Pod Selector
      • Replicas
    • Working with ReplicaSets
      • Deleting a ReplicaSet and its Pods
      • Deleting just a ReplicaSet
      • Isolating Pods from a ReplicaSet
      • Scaling a ReplicaSet
      • ReplicaSet as a Horizontal Pod Autoscaler Target
    • Alternatives to ReplicaSet
      • Deployment (recommended)
      • Bare Pods
      • Job
      • DaemonSet
      • ReplicationController

    ReplicaSet

    A ReplicaSet’s purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.

    How a ReplicaSet works

    A ReplicaSet is defined with fields, including a selector that specifies how to identify Pods it can acquire, a number of replicas indicating how many Pods it should be maintaining, and a pod template specifying the data of new Pods it should create to meet the number of replicas criteria. A ReplicaSet then fulfills its purpose by creating and deleting Pods as needed to reach the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod template.

    The link a ReplicaSet has to its Pods is via the Pods’ metadata.ownerReferences field, which specifies what resource the current object is owned by. All Pods acquired by a ReplicaSet have their owning ReplicaSet’s identifying information within their ownerReferences field. It’s through this link that the ReplicaSet knows of the state of the Pods it is maintaining and plans accordingly.

    A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no OwnerReference, or whose OwnerReference is not a Controller (a control loop that watches the shared state of the cluster through the apiserver and makes changes attempting to move the current state towards the desired state), and it matches a ReplicaSet’s selector, it will be immediately acquired by said ReplicaSet.

    When to use a ReplicaSet

    A ReplicaSet ensures that a specified number of pod replicas are running at any given time. However, a Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with a lot of other useful features. Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don’t require updates at all.

    This actually means that you may never need to manipulate ReplicaSet objects: use a Deployment instead, and define your application in the spec section.
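    As a rough sketch (the name, labels, and image here mirror the frontend example below and are purely illustrative), a Deployment manifest looks much like a ReplicaSet manifest, with kind: Deployment; the Deployment then creates and manages a ReplicaSet for you and replaces it whenever you change the Pod template:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: frontend
    spec:
      replicas: 3
      selector:
        matchLabels:
          tier: frontend
      template:
        metadata:
          labels:
            tier: frontend
        spec:
          containers:
          - name: php-redis
            image: gcr.io/google_samples/gb-frontend:v3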

    Example

    controllers/frontend.yaml

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: frontend
      labels:
        app: guestbook
        tier: frontend
    spec:
      # modify replicas according to your case
      replicas: 3
      selector:
        matchLabels:
          tier: frontend
      template:
        metadata:
          labels:
            tier: frontend
        spec:
          containers:
          - name: php-redis
            image: gcr.io/google_samples/gb-frontend:v3

    Saving this manifest into frontend.yaml and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the Pods that it manages.

    kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml

    You can then get the current ReplicaSets deployed:

    kubectl get rs

    And see the frontend one you created:

    NAME       DESIRED   CURRENT   READY   AGE
    frontend   3         3         3       6s

    You can also check on the state of the replicaset:

    kubectl describe rs/frontend

    And you will see output similar to:

    Name:         frontend
    Namespace:    default
    Selector:     tier=frontend,tier in (frontend)
    Labels:       app=guestbook
                  tier=frontend
    Annotations:  <none>
    Replicas:     3 current / 3 desired
    Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
    Pod Template:
      Labels:  app=guestbook
               tier=frontend
      Containers:
       php-redis:
        Image:      gcr.io/google_samples/gb-frontend:v3
        Port:       80/TCP
        Requests:
          cpu:      100m
          memory:   100Mi
        Environment:
          GET_HOSTS_FROM: dns
        Mounts:     <none>
      Volumes:      <none>
    Events:
      FirstSeen  LastSeen  Count  From                     SubobjectPath  Type    Reason            Message
      ---------  --------  -----  ----                     -------------  ------  ------            -------
      1m         1m        1      {replicaset-controller}                 Normal  SuccessfulCreate  Created pod: frontend-qhloh
      1m         1m        1      {replicaset-controller}                 Normal  SuccessfulCreate  Created pod: frontend-dnjpy
      1m         1m        1      {replicaset-controller}                 Normal  SuccessfulCreate  Created pod: frontend-9si5l

    And lastly you can check for the Pods brought up:

    kubectl get Pods

    You should see Pod information similar to:

    NAME             READY   STATUS    RESTARTS   AGE
    frontend-9si5l   1/1     Running   0          1m
    frontend-dnjpy   1/1     Running   0          1m
    frontend-qhloh   1/1     Running   0          1m

    You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. To do this, get the yaml of one of the Pods running:

    kubectl get pods frontend-9si5l -o yaml

    The output will look similar to this, with the frontend ReplicaSet’s info set in the metadata’s ownerReferences field:

    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: 2019-01-31T17:20:41Z
      generateName: frontend-
      labels:
        tier: frontend
      name: frontend-9si5l
      namespace: default
      ownerReferences:
      - apiVersion: extensions/v1beta1
        blockOwnerDeletion: true
        controller: true
        kind: ReplicaSet
        name: frontend
        uid: 892a2330-257c-11e9-aecd-025000000001
    ...

    Non-Template Pod acquisitions

    While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. The reason for this is that a ReplicaSet is not limited to owning Pods specified by its template; it can acquire other Pods in the manner specified in the previous sections.

    Take the previous frontend ReplicaSet example, and the Pods specified in the following manifest:

    pods/pod-rs.yaml

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod1
      labels:
        tier: frontend
    spec:
      containers:
      - name: hello1
        image: gcr.io/google-samples/hello-app:2.0
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod2
      labels:
        tier: frontend
    spec:
      containers:
      - name: hello2
        image: gcr.io/google-samples/hello-app:1.0

    As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it.

    Suppose you create the Pods after the frontend ReplicaSet has been deployed and has set up its initial Pod replicas to fulfill its replica count requirement:

    kubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml

    The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over its desired count.

    Fetching the Pods:

    kubectl get Pods

    The output shows that the new Pods are either already terminated, or in the process of being terminated:

    NAME             READY   STATUS        RESTARTS   AGE
    frontend-9si5l   1/1     Running       0          1m
    frontend-dnjpy   1/1     Running       0          1m
    frontend-qhloh   1/1     Running       0          1m
    pod2             0/1     Terminating   0          4s

    If you create the Pods first:

    kubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml

    and only then create the ReplicaSet:

    kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml

    You will see that the ReplicaSet has acquired the Pods and has created new ones only according to its spec, until the total number of its new Pods and the original Pods matches its desired count. Fetching the Pods:

    kubectl get Pods

    will reveal in its output:

    NAME             READY   STATUS    RESTARTS   AGE
    frontend-pxj4r   1/1     Running   0          5s
    pod1             1/1     Running   0          13s
    pod2             1/1     Running   0          13s

    In this manner, a ReplicaSet can own a non-homogeneous set of Pods.

    Writing a ReplicaSet manifest

    As with all other Kubernetes API objects, a ReplicaSet needs the apiVersion, kind, and metadata fields. For ReplicaSets, the kind is always just ReplicaSet. In Kubernetes 1.9 the API version apps/v1 on the ReplicaSet kind is the current version and is enabled by default. The API version apps/v1beta2 is deprecated. Refer to the first lines of the frontend.yaml example for guidance.

    A ReplicaSet also needs a .spec section.

    Pod Template

    The .spec.template is a pod template which is also required to have labels in place. In our frontend.yaml example we had one label: tier: frontend. Be careful not to overlap with the selectors of other controllers, lest they try to adopt this Pod.

    For the template’s restart policy field, .spec.template.spec.restartPolicy, the only allowed value is Always, which is the default.
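    In other words, you can omit the field or set it explicitly to Always; anything else is rejected. A minimal fragment of the template illustrating this (the container name and image are taken from the frontend example):

    template:
      spec:
        # optional; Always is both the default and the only value a ReplicaSet accepts
        restartPolicy: Always
        containers:
        - name: php-redis
          image: gcr.io/google_samples/gb-frontend:v3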

    Pod Selector

    The .spec.selector field is a label selector. As discussed earlier these are the labels used to identify potential Pods to acquire. In our frontend.yaml example, the selector was:

    matchLabels:
      tier: frontend

    In the ReplicaSet, .spec.template.metadata.labels must match .spec.selector, or it will be rejected by the API.

    Note: For 2 ReplicaSets specifying the same .spec.selector but different .spec.template.metadata.labels and .spec.template.spec fields, each ReplicaSet ignores the Pods created by the other ReplicaSet.
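    The selector also supports set-based requirements through matchExpressions. For example, the following sketch selects the same Pods as the matchLabels form above:

    selector:
      matchExpressions:
      - key: tier
        operator: In
        values:
        - frontend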

    Replicas

    You can specify how many Pods should run concurrently by setting .spec.replicas. The ReplicaSet will create/delete its Pods to match this number.

    If you do not specify .spec.replicas, then it defaults to 1.
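    For example, the following trimmed sketch of the frontend manifest omits .spec.replicas, so the ReplicaSet maintains a single Pod:

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: frontend
    spec:
      # .spec.replicas is omitted, so it defaults to 1
      selector:
        matchLabels:
          tier: frontend
      template:
        metadata:
          labels:
            tier: frontend
        spec:
          containers:
          - name: php-redis
            image: gcr.io/google_samples/gb-frontend:v3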

    Working with ReplicaSets

    Deleting a ReplicaSet and its Pods

    To delete a ReplicaSet and all of its Pods, use kubectl delete. The garbage collector automatically deletes all of the dependent Pods by default.
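    For example, to delete the frontend ReplicaSet from the example above along with its Pods:

    kubectl delete rs frontend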

    When using the REST API or the client-go library, you must set propagationPolicy to Background or Foreground in the -d option. For example:

    kubectl proxy --port=8080
    curl -X DELETE 'localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/frontend' \
    > -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
    > -H "Content-Type: application/json"

    Deleting just a ReplicaSet

    You can delete a ReplicaSet without affecting any of its Pods using kubectl delete with the --cascade=false option. When using the REST API or the client-go library, you must set propagationPolicy to Orphan. For example:

    kubectl proxy --port=8080
    curl -X DELETE 'localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/frontend' \
    > -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
    > -H "Content-Type: application/json"

    Once the original is deleted, you can create a new ReplicaSet to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old Pods. However, it will not make any effort to make existing Pods match a new, different pod template. To update Pods to a new spec in a controlled way, use a Deployment, as ReplicaSets do not support a rolling update directly.

    Isolating Pods from a ReplicaSet

    You can remove Pods from a ReplicaSet by changing their labels. This technique may be used to remove Pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed).
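    For example, changing the tier label on one of the frontend Pods takes it out of the ReplicaSet’s selector, and the controller creates a replacement (the Pod name is taken from the earlier example output; the debug value is just an illustration):

    kubectl label pod frontend-9si5l tier=debug --overwrite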

    Scaling a ReplicaSet

    A ReplicaSet can be easily scaled up or down by updating the .spec.replicas field. The ReplicaSet controller ensures that the desired number of Pods with a matching label selector are available and operational.
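    For example, you can edit .spec.replicas in the manifest and re-apply it, or scale the frontend ReplicaSet imperatively:

    kubectl scale rs frontend --replicas=5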

    ReplicaSet as a Horizontal Pod Autoscaler Target

    A ReplicaSet can also be a target for Horizontal Pod Autoscalers (HPA). That is, a ReplicaSet can be auto-scaled by an HPA. Here is an example HPA targeting the ReplicaSet we created in the previous example.

    controllers/hpa-rs.yaml

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: frontend-scaler
    spec:
      scaleTargetRef:
        kind: ReplicaSet
        name: frontend
      minReplicas: 3
      maxReplicas: 10
      targetCPUUtilizationPercentage: 50

    Saving this manifest into hpa-rs.yaml and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage of the replicated Pods.

    kubectl apply -f https://k8s.io/examples/controllers/hpa-rs.yaml

    Alternatively, you can use the kubectl autoscale command to accomplish the same (and it’s easier!):

    kubectl autoscale rs frontend --max=10

    Alternatives to ReplicaSet

    Deployment (recommended)

    Deployment is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. While ReplicaSets can be used independently, today they’re mainly used by Deployments as a mechanism to orchestrate Pod creation, deletion and updates. When you use Deployments you don’t have to worry about managing the ReplicaSets that they create. Deployments own and manage their ReplicaSets. As such, it is recommended to use Deployments when you want ReplicaSets.

    Bare Pods

    Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your application requires only a single Pod. Think of it similarly to a process supervisor, only it supervises multiple Pods across multiple nodes instead of individual processes on a single node. A ReplicaSet delegates local container restarts to some agent on the node (for example, Kubelet or Docker).

    Job

    Use a Job instead of a ReplicaSet for Pods that are expected to terminate on their own (that is, batch jobs).

    DaemonSet

    Use a DaemonSet instead of a ReplicaSet for Pods that provide a machine-level function, such as machine monitoring or machine logging. These Pods have a lifetime that is tied to a machine lifetime: the Pod needs to be running on the machine before other Pods start, and it is safe to terminate when the machine is otherwise ready to be rebooted or shut down.

    ReplicationController

    ReplicaSets are the successors to ReplicationControllers. The two serve the same purpose and behave similarly, except that a ReplicationController does not support set-based selector requirements as described in the labels user guide. As such, ReplicaSets are preferred over ReplicationControllers.
