Kubernetes Binary Deployment, Step by Step (Part 3): Component Installation (Single Node)

2021-03-11 16:30



Preface

In the first two articles we finished building the base environment, including the etcd cluster (with certificate creation), the flannel network, and the Docker engine. In this article we complete the single-node binary deployment of the Kubernetes cluster across the three servers.

Configuration on the master node

1. Create the working directory

[root@master01 k8s]# mkdir -p /opt/kubernetes/{cfg,bin,ssl}

2. Deploy the apiserver component

2.1 Create the apiserver certificates

2.1.1 Create the apiserver certificate directory and write the certificate generation script

[root@master01 k8s]# mkdir k8s-cert
[root@master01 k8s]# cd k8s-cert/

[root@master01 k8s-cert]# cat k8s-cert.sh 
#This kind of certificate script was already explained when the etcd cluster was built, so it is not repeated here;
#the main thing to plan carefully is the address list written into the request files.
#The script writes ca-config.json, ca-csr.json, server-csr.json, admin-csr.json and kube-proxy-csr.json
#via heredocs and then generates the certificates with cfssl; only the file names are reproduced here.
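
A minimal sketch of the CA and apiserver portions of such a script is shown below. The hosts list and the names fields are assumptions (the three server IPs used in this series plus the standard in-cluster apiserver addresses); adjust them to your own address plan. The admin and kube-proxy request files follow the same pattern.

# Sketch: assumed content of the CA config, CA CSR and apiserver CSR sections
cat > ca-config.json <<EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [ "signing", "key encipherment", "server auth", "client auth" ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [ { "C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "k8s", "OU": "System" } ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.0.128",
    "192.168.0.129",
    "192.168.0.130",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [ { "C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "k8s", "OU": "System" } ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -profile=kubernetes server-csr.json | cfssljson -bare server

# admin-csr.json and kube-proxy-csr.json are written the same way (CN "admin" and "system:kube-proxy")
# and signed with the same -profile=kubernetes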

2.1.2 Run the script and copy the communication certificates into the ssl directory of the working directory created above

[root@master01 k8s-cert]# bash k8s-cert.sh 
#List the files produced by the script
[root@master01 k8s-cert]# ls
admin.csr       admin.pem       ca-csr.json  k8s-cert.sh          kube-proxy-key.pem  server-csr.json
admin-csr.json  ca-config.json  ca-key.pem   kube-proxy.csr       kube-proxy.pem      server-key.pem
admin-key.pem   ca.csr          ca.pem       kube-proxy-csr.json  server.csr     
#Copy the certificates needed before installing the apiserver component into the working directory
[root@master01 k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
[root@master01 k8s-cert]# ls /opt/kubernetes/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem
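
If you want to sanity-check the certificates before moving on, openssl can print their subject and validity period (an optional check, not part of the original procedure):

openssl x509 -in /opt/kubernetes/ssl/ca.pem -noout -subject -dates
openssl x509 -in /opt/kubernetes/ssl/server.pem -noout -subject -dates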

2.2 Unpack the Kubernetes archive and copy the command-line tools into the bin directory of the working directory

Software package download:
Link: https://pan.baidu.com/s/1COp94_Y47TU0G8-QSYb5Nw
Extraction code: ftzq

[root@master01 k8s]# ls
apiserver.sh  controller-manager.sh  etcd-v3.3.10-linux-amd64         k8s-cert                              master.zip
cfssl.sh      etcd-cert              etcd-v3.3.10-linux-amd64.tar.gz  kubernetes-server-linux-amd64.tar.gz  scheduler.sh
[root@master01 k8s]# tar zxf kubernetes-server-linux-amd64.tar.gz 
[root@master01 k8s]# ls
apiserver.sh           etcd-cert                        k8s-cert                              master.zip
cfssl.sh               etcd-v3.3.10-linux-amd64         kubernetes                            scheduler.sh
controller-manager.sh  etcd-v3.3.10-linux-amd64.tar.gz  kubernetes-server-linux-amd64.tar.gz
[root@master01 k8s]# ls kubernetes/ -R
kubernetes/:
addons  kubernetes-src.tar.gz  LICENSES  server

kubernetes/addons:

kubernetes/server:
bin

kubernetes/server/bin:
apiextensions-apiserver              kube-apiserver.docker_tag           kube-proxy
cloud-controller-manager             kube-apiserver.tar                  kube-proxy.docker_tag
cloud-controller-manager.docker_tag  kube-controller-manager             kube-proxy.tar
cloud-controller-manager.tar         kube-controller-manager.docker_tag  kube-scheduler
hyperkube                            kube-controller-manager.tar         kube-scheduler.docker_tag
kubeadm                              kubectl                             kube-scheduler.tar
kube-apiserver                       kubelet                             mounter

#Enter the binaries directory and copy the required command-line tools into the bin directory of the working directory created earlier
[root@master01 k8s]# cd kubernetes/server/bin/
[root@master01 bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
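
As a quick sanity check that the right binaries landed in the working directory, each of them reports its version (an optional check; the package linked above ships v1.12 binaries):

/opt/kubernetes/bin/kube-apiserver --version
/opt/kubernetes/bin/kubectl version --client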

2.3 Create the bootstrap token

#Run this command to generate a random serial number, which will be written into token.csv
[root@master01 k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
7f42570ec314322c3d629868855d406f

[root@master01 k8s]# cat /opt/kubernetes/cfg/token.csv
7f42570ec314322c3d629868855d406f,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
#Comma-separated fields: token serial number, user name, UID, role
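
The article only shows the finished token.csv; a minimal sketch of how to write it, assuming you capture the random value in a shell variable first:

BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

Whatever token ends up in this file must match the one used later when building bootstrap.kubeconfig (the kubeconfig script below hardcodes 7f42570ec314322c3d629868855d406f).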

2.4 Start the apiserver service

Write the apiserver script

[root@master01 k8s]# vim apiserver.sh
#!/bin/bash

MASTER_ADDRESS=$1
ETCD_SERVERS=$2

#Generate the kube-apiserver configuration file in the k8s working directory
cat <<EOF >/opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=${ETCD_SERVERS} \
--bind-address=${MASTER_ADDRESS} \
--secure-port=6443 \
--advertise-address=${MASTER_ADDRESS} \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

#Generate the systemd startup unit
cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

#Start the apiserver component
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
[root@master01 k8s]# bash apiserver.sh 192.168.0.128 https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379
#Check whether the process started successfully
[root@master01 k8s]# ps aux | grep kube
root      56487 36.9 16.6 397952 311740 ?       Ssl  19:42   0:07 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379 --bind-address=192.168.0.128 --secure-port=6443 --advertise-address=192.168.0.128 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root      56503  0.0  0.0 112676   984 pts/4    R+   19:43   0:00 grep --color=auto kube

View the generated configuration file

[root@master01 k8s]# cat /opt/kubernetes/cfg/kube-apiserver 

KUBE_APISERVER_OPTS="--logtostderr=true --v=4 --etcd-servers=https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379 --bind-address=192.168.0.128 --secure-port=6443 --advertise-address=192.168.0.128 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem  --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem"

#Check the listening ports (secure 6443 and local 8080)

[root@master01 k8s]# netstat -natp | grep 6443
tcp        0      0 192.168.0.128:6443      0.0.0.0:*               LISTEN      56487/kube-apiserve 
tcp        0      0 192.168.0.128:6443      192.168.0.128:45162     ESTABLISHED 56487/kube-apiserve 
tcp        0      0 192.168.0.128:45162     192.168.0.128:6443      ESTABLISHED 56487/kube-apiserve 
[root@master01 k8s]# netstat -natp | grep 8080
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      56487/kube-apiserve 
[root@master01 k8s]# 
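
Because the insecure port 8080 only listens on 127.0.0.1, a quick local way to confirm the apiserver answers requests is to query the standard /version endpoint (an optional check):

curl -s http://127.0.0.1:8080/version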

3. Start the scheduler service

[root@master01 k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

The scheduler.sh script is as follows:

#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=${MASTER_ADDRESS}:8080 \
--leader-elect"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

Check that the process is running

[root@master01 k8s]# ps aux | grep kube-scheduler | grep -v grep
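
The scheduler can also be checked through systemd, using the unit the script just created:

systemctl is-active kube-scheduler
systemctl status kube-scheduler --no-pager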

4. Start the controller-manager service

Start it via the script

[root@master01 k8s]#  ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

The script content is as follows

#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=${MASTER_ADDRESS}:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

Now check the status of the master components

[root@master01 k8s]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   

If all components report Healthy, the configuration so far is correct.

The next step is the deployment on the worker nodes.

Node deployment

First, several files and command-line tools have to be copied to the node machines, so they are prepared on the master node and then pushed over remotely with scp.

1. Copy kubelet and kube-proxy from the master node to the node machines

[root@master01 bin]# pwd
/root/k8s/kubernetes/server/bin
[root@master01 bin]# scp kubelet kube-proxy root@192.168.0.129:/opt/kubernetes/bin/
root@192.168.0.129's password: 
kubelet                                                                                 100%  168MB  84.2MB/s   00:02    
kube-proxy                                                                              100%   48MB 104.6MB/s   00:00    
[root@master01 bin]# scp kubelet kube-proxy root@192.168.0.130:/opt/kubernetes/bin/
root@192.168.0.130's password: 
kubelet                                                                                 100%  168MB 123.6MB/s   00:01    
kube-proxy                                                                              100%   48MB 114.6MB/s   00:00    
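
The scp commands assume the /opt/kubernetes working directory already exists on both nodes (presumably created during the earlier environment preparation); if it does not, create it remotely first, for example:

ssh root@192.168.0.129 "mkdir -p /opt/kubernetes/{cfg,bin,ssl}"
ssh root@192.168.0.130 "mkdir -p /opt/kubernetes/{cfg,bin,ssl}"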

2. Create the configuration directory on the master node and write the configuration script

[root@master01 k8s]# mkdir kubeconfig
[root@master01 k8s]# cd kubeconfig/

[root@master01 kubeconfig]# cat kubeconfig 
APISERVER=$1
SSL_DIR=$2

# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes   --certificate-authority=$SSL_DIR/ca.pem   --embed-certs=true   --server=${KUBE_APISERVER}   --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap   --token=7f42570ec314322c3d629868855d406f   --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default   --cluster=kubernetes   --user=kubelet-bootstrap   --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes   --certificate-authority=$SSL_DIR/ca.pem   --embed-certs=true   --server=${KUBE_APISERVER}   --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy   --client-certificate=$SSL_DIR/kube-proxy.pem   --client-key=$SSL_DIR/kube-proxy-key.pem   --embed-certs=true   --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default   --cluster=kubernetes   --user=kube-proxy   --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
[root@master01 kubeconfig]# 

Set the environment variable

[root@master01 kubeconfig]# vim /etc/profile
#Append this line at the end of the file
export PATH=$PATH:/opt/kubernetes/bin/
[root@master01 kubeconfig]# source /etc/profile
[root@master01 kubeconfig]# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/opt/kubernetes/bin/
#Check the cluster status
[root@master01 kubeconfig]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
controller-manager   Healthy   ok                  

3. Generate the configuration files

[root@master01 k8s-cert]# cd -
/root/k8s/kubeconfig
[root@master01 kubeconfig]#  bash kubeconfig 192.168.0.128 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
#View the two generated configuration files
[root@master01 kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig
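
Both generated files embed the certificates, so they are long; to check them without reading the raw YAML, kubectl can summarize them (an optional check):

kubectl config view --kubeconfig=bootstrap.kubeconfig
kubectl config view --kubeconfig=kube-proxy.kubeconfig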

4. Copy the two configuration files to the node machines

[root@master01 kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.0.129:/opt/kubernetes/cfg/
root@192.168.0.129's password: 
bootstrap.kubeconfig                                                    100% 2166     1.2MB/s   00:00    
kube-proxy.kubeconfig                                                   100% 6268     8.1MB/s   00:00    
[root@master01 kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.0.130:/opt/kubernetes/cfg/
root@192.168.0.130's password: 
bootstrap.kubeconfig                                                                    100% 2166     1.4MB/s   00:00    
kube-proxy.kubeconfig                                                                   100% 6268     7.4MB/s   00:00    
[root@master01 kubeconfig]# 

5. Create the bootstrap role and grant it permission to connect to the apiserver and request certificate signing; this step is critical

[root@master01 kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
#The result is as follows
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
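
To confirm that the binding really exists (an optional check):

kubectl get clusterrolebinding kubelet-bootstrap
kubectl describe clusterrolebinding kubelet-bootstrap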

Operations on the node machines

Start the kubelet service on both nodes

[root@node01 opt]# bash kubelet.sh 192.168.0.129 # on node02, use 192.168.0.130
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

[root@node01 opt]# ps aux | grep kubelet
root      73575  1.0  1.0 535312 42456 ?        Ssl  20:14   0:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.0.129 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root      73651  0.0  0.0 112676   984 pts/3    R+   20:15   0:00 grep --color=auto kubelet
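
The kubelet.sh script itself is not listed in this article. Based on the options visible in the process listing above, a minimal sketch might look like the following; the kubelet.config content is an assumption inferred from those flags, and the script shipped in the software package may differ:

#!/bin/bash

NODE_ADDRESS=$1
DNS_SERVER_IP=${2:-"10.0.0.2"}

cat <<EOF >/opt/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=${NODE_ADDRESS} \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

EOF

# Assumed minimal KubeletConfiguration matching the flags above
cat <<EOF >/opt/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_ADDRESS}
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- ${DNS_SERVER_IP}
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet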

Verify on the master node

[root@master01 kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-gt0pU-SbuA0k8z53lmir1q6m-i7Owo3JC8eKm2oujUk   8s    kubelet-bootstrap   Pending
node-csr-i4n0MgQnmFT7NT_VszB8DXohWN1ilhJKnyQJq_9rodg   24s   kubelet-bootstrap   Pending
[root@master01 kubeconfig]# 

PS: Pending means the node is waiting for the cluster to issue it a certificate

[root@master01 kubeconfig]# kubectl certificate approve node-csr-gt0pU-SbuA0k8z53lmir1q6m-i7Owo3JC8eKm2oujUk
certificatesigningrequest.certificates.k8s.io/node-csr-gt0pU-SbuA0k8z53lmir1q6m-i7Owo3JC8eKm2oujUk approved
[root@master01 kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-gt0pU-SbuA0k8z53lmir1q6m-i7Owo3JC8eKm2oujUk   3m46s   kubelet-bootstrap   Approved,Issued
node-csr-i4n0MgQnmFT7NT_VszB8DXohWN1ilhJKnyQJq_9rodg   4m2s    kubelet-bootstrap   Pending

PS: Approved,Issued means the node has been allowed to join the cluster

#Check the cluster nodes: node02 has joined successfully


[root@master01 kubeconfig]#  kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
192.168.0.130   Ready    <none>   69s   v1.12.3

Now approve node01 as well

[root@master01 kubeconfig]# kubectl certificate approve node-csr-i4n0MgQnmFT7NT_VszB8DXohWN1ilhJKnyQJq_9rodg
certificatesigningrequest.certificates.k8s.io/node-csr-i4n0MgQnmFT7NT_VszB8DXohWN1ilhJKnyQJq_9rodg approved
[root@master01 kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-gt0pU-SbuA0k8z53lmir1q6m-i7Owo3JC8eKm2oujUk   6m20s   kubelet-bootstrap   Approved,Issued
node-csr-i4n0MgQnmFT7NT_VszB8DXohWN1ilhJKnyQJq_9rodg   6m36s   kubelet-bootstrap   Approved,Issued
[root@master01 kubeconfig]#  kubectl get node
NAME            STATUS   ROLES    AGE     VERSION
192.168.0.129   Ready    <none>   7s      v1.12.3
192.168.0.130   Ready    <none>   2m55s   v1.12.3

Start the kube-proxy service on both nodes

[root@node01 opt]# bash proxy.sh 192.168.0.129
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
#Check the status of the proxy service
[root@node01 opt]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2020-05-04 20:45:26 CST; 1min 9s ago
 Main PID: 77325 (kube-proxy)
   Memory: 7.6M
   CGroup: /system.slice/kube-proxy.service
           └─77325 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168....
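
Likewise, proxy.sh is not listed here; a minimal sketch, assuming it follows the same pattern as the other service scripts and produces the kube-proxy options shown in the node configuration files below:

#!/bin/bash

NODE_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=${NODE_ADDRESS} \
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy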

At this point the single-node Kubernetes cluster is fully configured; I split the whole process into three articles, one step at a time.

Finally, here are the contents of the configuration files on the node machines in the cluster.

On node01
[root@node01 cfg]# cat kubelet

KUBELET_OPTS="--logtostderr=true --v=4 --hostname-override=192.168.0.129 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

[root@node01 cfg]# cat kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true --v=4 --hostname-override=192.168.0.129 --cluster-cidr=10.0.0.0/24 --proxy-mode=ipvs --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

On node02
[root@node02 cfg]# cat kubelet

KUBELET_OPTS="--logtostderr=true --v=4 --hostname-override=192.168.0.130 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

[root@node02 cfg]# cat kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true --v=4 --hostname-override=192.168.0.130 --cluster-cidr=10.0.0.0/24 --proxy-mode=ipvs --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"


Original source: https://blog.51cto.com/14557673/2492419

