Deploying Kubernetes v1.17.4 with kubeadm: single master node


Environment:

# OS: CentOS 7 (the master in the sample output below is in fact running CentOS 8)
# Docker version: 19.03.8
# Kubernetes version: v1.17.4
# K8S master node IP: 192.168.3.62
# K8S worker node IP: 192.168.2.186
# Network plugin: flannel
# kube-proxy forwarding mode: ipvs
# Kubernetes package source: Aliyun mirror
# service-cidr: 10.96.0.0/16
# pod-network-cidr: 10.244.0.0/16

Preparation:

Perform these steps on all nodes.
Tune kernel parameters and disable swap:
vim /etc/sysctl.conf
vm.swappiness=0
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
sysctl -p
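Note: the net.bridge.bridge-nf-call-* keys only exist while the br_netfilter kernel module is loaded, so sysctl -p can fail on a fresh host. A minimal sketch to load it now and on every boot (assumes systemd's modules-load.d, standard on CentOS 7):
# load the module immediately
modprobe br_netfilter
# have systemd load it at boot
cat <<EOF > /etc/modules-load.d/br_netfilter.conf
br_netfilter
EOF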
Apply immediately without rebooting:
swapoff -a && sysctl -w vm.swappiness=0
Edit fstab so swap is no longer mounted at boot (comment out or remove the swap line):
vi /etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
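To comment out the swap entry without editing the file by hand, a sed one-liner such as the following should work (a sketch; verify /etc/fstab afterwards):
# prefix any non-comment line mentioning swap with '#'
sed -ri 's/^([^#].*swap.*)$/#\1/' /etc/fstab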
Install Docker. Add the Aliyun docker-ce repo (yum-config-manager is provided by yum-utils; install that first if it is missing):
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Add the Docker daemon configuration:
mkdir -p /etc/docker
vim /etc/docker/daemon.json
{
  "max-concurrent-downloads": 20,
  "data-root": "/apps/docker/data",
  "exec-root": "/apps/docker/root",
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
  "log-driver": "json-file",
  "bridge": "docker0",
  "oom-score-adjust": -1000,
  "debug": false,
  "log-opts": {
    "max-size": "100M",
    "max-file": "10"
  },
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 1024000,
      "Soft": 1024000
    },
    "nproc": {
      "Name": "nproc",
      "Hard": 1024000,
      "Soft": 1024000
    },
    "core": {
      "Name": "core",
      "Hard": -1,
      "Soft": -1
    }
  }
}
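The kubeadm preflight check later warns that Docker is using the "cgroupfs" cgroup driver and recommends "systemd". Optionally, one more key can be merged into daemon.json before Docker starts to silence that warning (kubeadm detects Docker's cgroup driver and passes it to the kubelet, so either driver works; this is an optional tweak, not something the original setup did):
"exec-opts": ["native.cgroupdriver=systemd"]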

Install common dependencies:
yum install -y yum-utils ipvsadm telnet wget net-tools conntrack ipset jq iptables curl sysstat libseccomp socat nfs-utils fuse fuse-devel
Install Docker prerequisites:
yum install -y python-pip python-devel yum-utils device-mapper-persistent-data lvm2
Install Docker:
yum install -y docker-ce
Reload systemd unit files:
systemctl daemon-reload
Restart Docker:
systemctl restart docker
Enable Docker on boot:
systemctl enable docker
Create a module-load script so the IPVS kernel modules are loaded on boot:
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
Make /etc/sysconfig/modules/ipvs.modules executable:
chmod +x /etc/sysconfig/modules/ipvs.modules
Run /etc/sysconfig/modules/ipvs.modules:
/etc/sysconfig/modules/ipvs.modules
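To confirm the modules actually loaded:
lsmod | grep -e ip_vs -e nf_conntrack_ipv4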
# Kubernetes yum repo (Aliyun mirror)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
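Install kubeadm, kubelet, and kubectl pinned to the target version (the exact versioned package names are an assumption; adjust to what the repo actually provides):
yum install -y kubelet-1.17.4 kubeadm-1.17.4 kubectl-1.17.4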

Initialize Kubernetes

 # Initialize the master node
 kubeadm init --apiserver-advertise-address=0.0.0.0 \
   --apiserver-cert-extra-sans=127.0.0.1 \
   --image-repository=registry.aliyuncs.com/google_containers \
   --ignore-preflight-errors=all \
   --kubernetes-version=v1.17.4 \
   --service-cidr=10.96.0.0/16 \
   --pod-network-cidr=10.244.0.0/16
# Init output:
[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=0.0.0.0 \
>   --apiserver-cert-extra-sans=127.0.0.1 \
>   --image-repository=registry.aliyuncs.com/google_containers \
>   --ignore-preflight-errors=all \
>   --kubernetes-version=v1.17.4 \
>   --service-cidr=10.96.0.0/16 \
>   --pod-network-cidr=10.244.0.0/16
W0321 15:48:22.675239    3126 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0321 15:48:22.675321    3126 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.4
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileExisting-tc]: tc not found in system path
        [WARNING Hostname]: hostname "k8s-master" could not be reached
        [WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 192.168.1.169:53: no such host
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.3.62 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.3.62 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.3.62 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0321 15:49:11.580457    3126 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0321 15:49:11.581881    3126 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 30.003890 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: fzyao0.q90my43drmpbstgw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.3.62:6443 --token fzyao0.q90my43drmpbstgw     --discovery-token-ca-cert-hash sha256:d7fb17be78dbaf019433e3d97423ab35d42800221a77f0b7d486e1c3e2544437
# Save the kubeadm join command printed above; it is needed when joining worker nodes
Enable kubelet on boot:
systemctl enable kubelet

Copy the admin kubeconfig so kubectl can be used to manage the cluster:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Verify the master was deployed successfully
[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@k8s-master ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.3.62:6443
KubeDNS is running at https://192.168.3.62:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
# Check whether the pods are deployed
[root@k8s-master ~]# kubectl get pod -A
NAMESPACE     NAME                                 READY   STATUS              RESTARTS   AGE
kube-system   coredns-9d85f5447-5gltw              0/1     ContainerCreating   0          6m4s
kube-system   coredns-9d85f5447-v5q5l              0/1     ContainerCreating   0          6m4s
kube-system   etcd-k8s-master                      1/1     Running             0          6m5s
kube-system   kube-apiserver-k8s-master            1/1     Running             0          6m5s
kube-system   kube-controller-manager-k8s-master   1/1     Running             0          6m5s
kube-system   kube-proxy-frk98                     1/1     Running             0          6m4s
kube-system   kube-scheduler-k8s-master            1/1     Running             0          6m5s
# ContainerCreating means the network plugin has not been deployed yet
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   NotReady    master   6m59s   v1.17.4
# NotReady because CNI is enabled but no CNI config file exists yet; the node becomes Ready once the network plugin is deployed (see the check below)
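The exact reason for NotReady shows up in the node's Ready condition; a quick way to read it (the grep pattern is just a convenience):
kubectl describe node k8s-master | grep -i -A 2 "Ready"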

Switch kube-proxy forwarding to IPVS

# Find the kube-proxy ConfigMap
kubectl -n kube-system get cm
[root@k8s-master ~]# kubectl -n kube-system get cm
NAME                                 DATA   AGE
coredns                              1      9m46s
extension-apiserver-authentication   6      9m51s
kube-proxy                           2      9m46s
kubeadm-config                       2      9m48s
kubelet-config-1.17                  1      9m48s
# The ConfigMap we need is kube-proxy
# Edit the kube-proxy ConfigMap
kubectl -n kube-system edit cm kube-proxy
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: true # enable --masquerade-all so return traffic is masqueraded
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: "0.0.0.0" # expose kube-proxy metrics
    mode: "ipvs" # enable ipvs mode
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false
      networkName: ""
      sourceVip: ""
  kubeconfig.conf: |-
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://192.168.3.62:6443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
  creationTimestamp: "2020-03-21T07:49:43Z"
  labels:
    app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "228"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kube-proxy
  uid: 4267c9ed-34f6-44d4-95ee-21cda3c4ba64         
# Adjust the other fields as needed
# Save the file with :wq!
[root@k8s-master ~]# kubectl -n kube-system edit cm kube-proxy
configmap/kube-proxy edited
# Delete the running kube-proxy pod so it restarts with the new configuration
[root@k8s-master ~]# kubectl -n kube-system get pod
NAME                                 READY   STATUS              RESTARTS   AGE
coredns-9d85f5447-5gltw              0/1     ContainerCreating   0          21m
coredns-9d85f5447-v5q5l              0/1     ContainerCreating   0          21m
etcd-k8s-master                      1/1     Running             0          21m
kube-apiserver-k8s-master            1/1     Running             0          21m
kube-controller-manager-k8s-master   1/1     Running             0          21m
kube-proxy-frk98                     1/1     Running             0          21m
kube-scheduler-k8s-master            1/1     Running             0          21m
# Delete the pod
[root@k8s-master ~]# kubectl -n kube-system delete pod kube-proxy-frk98
pod "kube-proxy-frk98" deleted
# Check the replacement pod started
[root@k8s-master ~]# kubectl -n kube-system get pod|grep kube-proxy
kube-proxy-kkdl2                     1/1     Running             0          49s 
# Check IPVS is active
[root@k8s-master ~]# ip a | grep ipvs
4: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0
    inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0
[root@k8s-master ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.3.62:6443            Masq    1      0          0
TCP  10.96.0.10:53 rr
TCP  10.96.0.10:9153 rr
UDP  10.96.0.10:53 rr
# Everything is working
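kube-proxy also logs which proxier it selected at startup; a quick sanity check (again assuming the default k8s-app=kube-proxy label):
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs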

Deploy the flannel network plugin

# You can deploy any CNI plugin you prefer
vim kube-flannel.yml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
     {
     "name":"cni0",
     "cniVersion":"0.3.1",
     "plugins":[
       {
         "type":"flannel",
         "delegate":{
           "forceAddress":false,
           "hairpinMode": true,
           "isDefaultGateway":true
         }
       },
       {
         "type":"portmap",
         "capabilities":{
           "portMappings":true
         }
       },
       {
         "name": "mytuning",
         "type": "tuning",
         "sysctl": {
           "net.core.somaxconn": "65535",
           "net.ipv4.ip_local_port_range": "1024 65535",
           "net.ipv4.tcp_keepalive_time": "600",
           "net.ipv4.tcp_keepalive_probes": "10",
           "net.ipv4.tcp_keepalive_intvl": "30"
         }
       }
     ]
     }
  # Remember to change "Network" to match your own pod-network-cidr
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "DirectRouting": true
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
# Deploy flannel
 kubectl apply -f kube-flannel.yml
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
# Check flannel deployed successfully
[root@k8s-master ~]# kubectl -n kube-system get pod|grep flannel
kube-flannel-ds-amd64-6lmq7          1/1     Running   0          54s
# Check the network interfaces
ip a
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 4a:9c:77:15:fc:af brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::489c:77ff:fe15:fcaf/64 scope link
       valid_lft forever preferred_lft forever
6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 0a:58:0a:f4:00:01 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.1/24 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::5861:f7ff:febf:8c96/64 scope link
       valid_lft forever preferred_lft forever
# Check node status; unlike before, it is no longer NotReady
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   30m   v1.17.4
# Check pod status
[root@k8s-master ~]# kubectl get pod -A
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-9d85f5447-5gltw              1/1     Running   0          30m
kube-system   coredns-9d85f5447-v5q5l              1/1     Running   0          30m
kube-system   etcd-k8s-master                      1/1     Running   0          30m
kube-system   kube-apiserver-k8s-master            1/1     Running   0          30m
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          30m
kube-system   kube-flannel-ds-amd64-6lmq7          1/1     Running   0          2m55s
kube-system   kube-proxy-kkdl2                     1/1     Running   0          8m49s
kube-system   kube-scheduler-k8s-master            1/1     Running   0          30m
# coredns is now fully deployed
[root@k8s-master ~]# ipvsadm -ln -c
IPVS connection entries
pro expire state       source             virtual            destination
TCP 14:58  ESTABLISHED 10.244.0.3:59832   10.96.0.1:443      192.168.3.62:6443
TCP 14:40  ESTABLISHED 10.96.0.1:53774    10.96.0.1:443      192.168.3.62:6443
TCP 14:58  ESTABLISHED 10.244.0.2:42436   10.96.0.1:443      192.168.3.62:6443
[root@k8s-master ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.3.62:6443            Masq    1      3          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0
  -> 10.244.0.3:53                Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.0.2:9153              Masq    1      0          0
  -> 10.244.0.3:9153              Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0
  -> 10.244.0.3:53                Masq    1      0          0
[root@k8s-master ~]# dig @10.96.0.10 www.baidu.com

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el8 <<>> @10.96.0.10 www.baidu.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- ...
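Cluster DNS can also be verified from inside the cluster with a throwaway pod (busybox:1.28 is the image used in the Kubernetes DNS debugging docs; any image with nslookup works):
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default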

Deploy the worker node

# Run on node 192.168.2.186
kubeadm join 192.168.3.62:6443 --token fzyao0.q90my43drmpbstgw     --discovery-token-ca-cert-hash sha256:d7fb17be78dbaf019433e3d97423ab35d42800221a77f0b7d486e1c3e2544437
[root@nginx-1 ~]# kubeadm join 192.168.3.62:6443 --token fzyao0.q90my43drmpbstgw \
>     --discovery-token-ca-cert-hash sha256:d7fb17be78dbaf019433e3d97423ab35d42800221a77f0b7d486e1c3e2544437
W0321 16:23:21.870029    1700 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# Verify the node joined correctly
[root@k8s-master ~]# kubectl get nodes -o wide
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-master   Ready    master   35m   v1.17.4   192.168.3.62    <none>        CentOS Linux 8 (Core)   4.18.0-147.5.1.el8_1.x86_64   docker://19.3.8
nginx-1      Ready    <none>   64s   v1.17.4   192.168.2.186   <none>        CentOS Linux 7 (Core)   3.10.0-1062.1.2.el7.x86_64    docker://19.3.8
# The node has joined and is Ready
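The worker's ROLES column shows <none>; a role label can optionally be added so it reads "worker" (the node-role.kubernetes.io/worker key is a common convention, not something kubeadm sets):
kubectl label node nginx-1 node-role.kubernetes.io/worker=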
Enable kubelet on boot (on the worker):
systemctl enable kubelet

Notes

# kubeadm tokens expire after 24 hours by default; once expired, generate a new token before joining more nodes
# List tokens
kubeadm token list
# Create a token
kubeadm token create
# What if you forgot the join command printed when the master was initialized?
# The simple way
kubeadm token create --print-join-command
# A second way
token=$(kubeadm token generate)
kubeadm token create $token --print-join-command --ttl=0
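If only a token is available, the --discovery-token-ca-cert-hash value can be recomputed from the master's CA certificate (this pipeline is the one given in the kubeadm documentation):
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'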
# Next you can deploy monitoring, your first application, and so on

Original article: https://blog.51cto.com/juestnow/2480667