Kubernetes study notes (Replication Controller)
2021-04-09 15:26
A ReplicationController schedules pods onto worker nodes; the kubelet on each node pulls the image and creates the containers.
When defining a liveness probe, setting spec.containers.livenessProbe.initialDelaySeconds is important: it gives the application time to start up before the first liveness probe request is sent.
Don't specify a pod selector when defining a ReplicationController; let Kubernetes extract it from the pod template. If the ReplicationController's label selector does not match the pod template, the Kubernetes API server reports an error.
Resource kinds covered below: ReplicaSet (apiVersion: apps/v1beta2, kind: ReplicaSet), DaemonSet (apiVersion: apps/v1beta2, kind: DaemonSet), Job (apiVersion: batch/v1, kind: Job), CronJob (apiVersion: batch/v1beta1, kind: CronJob).
Original source: https://blog.51cto.com/shadowisper/2476298
A pod started by "kubectl run" is not created directly: the command creates a ReplicationController, and the ReplicationController creates the pod.
Create a pod: kubectl run [replication controller name] --image=[image name] --port=[port number] --generator=run/v1. Without the --generator flag, kubectl creates a Deployment instead of a ReplicationController.
A ReplicationController can scale the number of pods: kubectl scale rc [replication controller name] --replicas=3
A ReplicationController ensures that the number of pods matching its label selector equals the desired replica count; if not, it starts new pods or deletes existing ones.
Pods started by a ReplicationController (e.g. via kubectl run) cannot be removed by deleting the pod alone: a replacement pod is brought up. The ReplicationController itself must be deleted.
An HTTP GET liveness probe in the pod spec:
spec:
  containers:
  - name: kubia
    image: luksa/kubia
    livenessProbe:
      httpGet:
        path: /
        port: 8080
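A complete pod manifest combining the probe fragment above with the initialDelaySeconds setting mentioned earlier (the pod name and delay value are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
  - name: kubia
    image: luksa/kubia
    livenessProbe:
      httpGet:
        path: /
        port: 8080
      initialDelaySeconds: 15

With this, the kubelet waits 15 seconds after the container starts before sending the first probe request, so a slow-starting application is not killed prematurely.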
kubectl describe po [pod name]: the Last State and Events sections can show what went wrong with the previous pod.
kubectl logs [pod name] --previous shows the log of the previous, crashed container.
spec:
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
        ports:
        - containerPort: 8080
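Wrapped in a full manifest, the pod template above sits inside a ReplicationController like this (the name and replica count are illustrative; the selector is deliberately omitted so Kubernetes derives it from the template labels, per the advice above):

apiVersion: v1
kind: ReplicationController
metadata:
  name: kubia
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
        ports:
        - containerPort: 8080

After kubectl create -f with this file, kubectl scale rc kubia --replicas=5 adjusts the desired count.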
Changes to a ReplicationController's label selector or pod template have no effect on existing pods.
By changing a pod's labels, it can be removed from or added to the scope of a ReplicationController; it can even be moved from one ReplicationController to another.
A pod's metadata.ownerReferences field shows which ReplicationController owns it.
ReplicaSets support selector expressions under spec.selector.matchExpressions (in addition to matchLabels).
These support a rich set of operators: In, NotIn, Exists, DoesNotExist.
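A minimal ReplicaSet sketch using a matchExpressions selector (names are illustrative):

apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchExpressions:
    - key: app
      operator: In
      values:
      - kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia

The In operator matches pods whose app label value is in the given list; Exists and DoesNotExist take only a key and no values list.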
Use a DaemonSet to run exactly one pod on each node
Used for node-level services like log collectors
Automatically adapts to node additions by creating a pod on each new node
No replica count required
A node selector can be specified to deploy pods to only a subset of nodes
Deploying pods via a DaemonSet bypasses the Kubernetes scheduler
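A DaemonSet sketch that restricts its pods to SSD nodes via a node selector (the labels and image name are illustrative):

apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  name: ssd-monitor
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      nodeSelector:
        disk: ssd
      containers:
      - name: main
        image: luksa/ssd-monitor

Only nodes carrying the disk=ssd label (e.g. applied with kubectl label node [node name] disk=ssd) get a pod; all other nodes are skipped.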
A Job is used for run-once, ad hoc tasks
Set the pod template's spec.restartPolicy to OnFailure or Never to control how job execution failures are handled (the default, Always, is not allowed for jobs)
Set spec.completions: 5 (total number of successful executions) and spec.parallelism: 2 (number of pods running in parallel) to run a job multiple times and in parallel
Set spec.activeDeadlineSeconds to terminate a long-running job. Set spec.backoffLimit to cap the number of retries before the job is marked as failed
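The Job settings above combine into a manifest like this (the name, image, and limit values are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: multi-completion-batch-job
spec:
  completions: 5
  parallelism: 2
  activeDeadlineSeconds: 600
  backoffLimit: 4
  template:
    metadata:
      labels:
        app: batch-job
    spec:
      restartPolicy: OnFailure
      containers:
      - name: main
        image: luksa/batch-job

This job runs until 5 pods complete successfully, at most 2 at a time; it is marked failed if it exceeds 600 seconds of activity or 4 retries.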
Set spec.schedule: "0,15,30,45 * * * *" to run a CronJob every 15 minutes
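A CronJob manifest sketch wrapping a job template with that schedule (the name and image are illustrative):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: batch-job-every-fifteen-minutes
spec:
  schedule: "0,15,30,45 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: main
            image: luksa/batch-job

The five schedule fields are minute, hour, day of month, month, and day of week, so this fires at minutes 0, 15, 30, and 45 of every hour.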
Article title: Kubernetes study notes (Replication Controller)
Article link: http://soscw.com/index.php/essay/73371.html