Kubernetes study notes (Replication Controller)

2021-04-09 15:26



Replication controller:

The Kubernetes scheduler assigns each pod to a worker node; the kubelet on that node then pulls the image and creates the containers.
A pod started with "kubectl run" is not created directly: the command creates a ReplicationController, and the ReplicationController creates the pod.
Create a pod: kubectl run [replication controller name] --image=[image name] --port=[port number] --generator=run/v1. Without the --generator flag, the command creates a Deployment instead of a ReplicationController.
A ReplicationController can scale the number of pods: kubectl scale rc [replication controller] --replicas=3
A ReplicationController ensures that the number of pods matching its label selector equals the desired replica count; if not, it starts new pods or deletes existing ones.
Pods started by a ReplicationController (e.g. via kubectl run) cannot simply be deleted; a replacement pod is brought up immediately. Delete the ReplicationController itself instead.
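The lifecycle above can be sketched as a short command sequence (the name kubia and image luksa/kubia are illustrative, taken from the pod template later in these notes; these generators were removed in later Kubernetes releases):

```shell
# Creates a ReplicationController named kubia, which in turn creates the pod;
# without --generator=run/v1, kubectl run creates a Deployment instead
kubectl run kubia --image=luksa/kubia --port=8080 --generator=run/v1

# Scale the controller up to three replicas
kubectl scale rc kubia --replicas=3

# Deleting a managed pod only triggers a replacement;
# delete the controller itself to remove its pods for good
kubectl delete rc kubia
```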

Defining a liveness probe allows the kubelet to detect an unhealthy container and restart it:

spec:
  containers:
    - name: kubia
      livenessProbe:
        httpGet:
          path: /
          port: 8080

spec.containers[].livenessProbe.initialDelaySeconds is important: it gives the application time to finish starting up before the first liveness probe request is sent.

Troubleshoot pod failure:
kubectl describe po [pod name]: the Last State and Events sections show the previous pod's problem
kubectl logs [pod name] --previous shows the log of the previous, crashed container

Define good liveness probe:

  • Probe a dedicated /health endpoint
  • Keep the probe lightweight
  • Don't implement retry loops in the probe; the kubelet retries failed probes automatically
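Putting these probe settings together, a pod manifest might look like this (a sketch; the pod name, the /health path, and the 15-second delay are illustrative values, not from the original notes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
    - name: kubia
      image: luksa/kubia
      livenessProbe:
        httpGet:
          path: /health          # dedicated, lightweight health endpoint
          port: 8080
        initialDelaySeconds: 15  # give the app time to start before the first probe
```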

A ReplicationController definition contains a pod template:

spec:
  template: 
    metadata: 
      labels: 
        app: kubia 
    spec: 
      containers: 
        - name: kubia 
          image: luksa/kubia 
          ports: 
            - containerPort: 8080 

Don't specify a pod selector when defining a ReplicationController; let Kubernetes extract it from the pod template. If the ReplicationController's label selector does not match the pod template's labels, the Kubernetes API server reports an error.
Changes to a ReplicationController's label selector or pod template have no effect on existing pods.
By changing a pod's labels, it can be removed from or added to the scope of a ReplicationController; it can even be moved from one ReplicationController to another.
A pod's metadata.ownerReferences field shows which ReplicationController owns it.
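A minimal sketch of pushing a pod out of a controller's scope by relabeling (the pod name kubia-abcde and the app=debug label are hypothetical):

```shell
# Overwrite the app label; the pod no longer matches the RC's selector,
# so the RC starts a replacement while the relabeled pod keeps running unmanaged
kubectl label pod kubia-abcde app=debug --overwrite
```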

Comparing ReplicaSet (apiVersion: apps/v1beta2, kind: ReplicaSet) with ReplicationController:
A ReplicaSet's label selector goes under spec.selector.matchLabels
ReplicaSets also support richer selector expressions under spec.selector.matchExpressions, with operators In, NotIn, Exists, DoesNotExist
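For example, a ReplicaSet selector using matchExpressions might look like this (a sketch reusing the kubia name and image from the pod template above):

```yaml
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchExpressions:      # richer than plain matchLabels
      - key: app
        operator: In       # also: NotIn, Exists, DoesNotExist
        values:
          - kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
        - name: kubia
          image: luksa/kubia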

DaemonSet: (apiVersion: apps/v1beta2, kind: DaemonSet)
Use a DaemonSet to run exactly one pod on each node
Used for node-level services such as log collectors
Automatically adapts to node additions by creating a pod on each new node
No replica count is required
Can specify a nodeSelector to deploy pods to only a subset of nodes
Pods deployed via a DaemonSet bypass the Kubernetes scheduler
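A minimal DaemonSet sketch with a nodeSelector (the ssd-monitor name, the luksa/ssd-monitor image, and the disk=ssd node label are illustrative assumptions):

```yaml
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  name: ssd-monitor
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      nodeSelector:        # run only on nodes labeled disk=ssd
        disk: ssd
      containers:
        - name: main
          image: luksa/ssd-monitor
```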

Job resource: (apiVersion: batch/v1, kind: Job)
Used for run-once, ad hoc tasks
Set restartPolicy: OnFailure or Never in the pod template to handle job execution failure (the default Always is not allowed for Jobs)
Set spec.completions: 5 (total number of executions) and spec.parallelism: 2 (number of pods running in parallel) to run a job multiple times and in parallel
Set spec.activeDeadlineSeconds to terminate long-running jobs; set spec.backoffLimit for the number of retries before marking the job as failed
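The Job fields above combined into one manifest (the batch-job name, image, and the 600-second deadline are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-job
spec:
  completions: 5              # run the pod to completion 5 times in total
  parallelism: 2              # at most 2 pods running at the same time
  activeDeadlineSeconds: 600  # fail the job if it runs longer than 10 minutes
  backoffLimit: 6             # retries before the job is marked as failed
  template:
    spec:
      restartPolicy: OnFailure  # Always (the default) is not allowed for Jobs
      containers:
        - name: main
          image: luksa/batch-job
```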

CronJob: (apiVersion: batch/v1beta1, kind: CronJob)
Set spec.schedule using standard cron syntax, e.g. spec: schedule: "0,15,30,45 " (the minutes at which to run)
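A CronJob sketch, assuming the truncated schedule above was meant as a five-field cron expression running every 15 minutes (the name and image are illustrative):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: batch-job-every-fifteen-minutes
spec:
  schedule: "0,15,30,45 * * * *"  # minute hour day-of-month month day-of-week
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: main
              image: luksa/batch-job
```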


Original post: https://blog.51cto.com/shadowisper/2476298

