Kubernetes (7) Binary Installation: Worker Node Setup
2021-03-30 16:24
Original article: https://www.cnblogs.com/gaofeng-henu/p/12594633.html

kubelet runs on every worker node. It receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs. On startup, kubelet automatically registers node information with kube-apiserver, and its built-in cAdvisor collects and reports the node's resource usage.

For security, this deployment disables kubelet's insecure HTTP port, and every request is authenticated and authorized, so unauthorized access (for example from apiserver or heapster) is rejected.

Configuring kubelet

Create the kubelet bootstrap kubeconfig file
cd /opt/k8s/work
export KUBE_APISERVER=https://192.168.0.107:6443
export node_name=slave
export BOOTSTRAP_TOKEN=$(kubeadm token create --description kubelet-bootstrap-token --groups system:bootstrappers:${node_name} --kubeconfig ~/.kube/config)
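If you want to confirm the token was registered (a quick check, not part of the original write-up), kubeadm can list it:

kubeadm token list --kubeconfig ~/.kube/config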
# Set cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/cert/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=kubelet-bootstrap.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
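Optionally, sanity-check the generated kubeconfig (kubectl redacts the token in this view):

kubectl config view --kubeconfig=kubelet-bootstrap.kubeconfig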
Distribute the bootstrap kubeconfig file to all worker nodes

cd /opt/k8s/work
export node_ip=192.168.0.114
scp kubelet-bootstrap.kubeconfig root@${node_ip}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
Create and distribute the kubelet parameter config file

Starting with v1.10, some kubelet parameters must be set in a config file rather than on the command line; kubelet --help indicates which ones.

cd /opt/k8s/work
export CLUSTER_CIDR="172.30.0.0/16"
export NODE_IP=192.168.0.114
export CLUSTER_DNS_SVC_IP="10.254.0.2"
Write the kubelet config file kubelet-config.yaml:
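The heredoc body was lost in extraction. What follows is a minimal sketch of a kubelet config consistent with this deployment (x509/webhook authentication, webhook authorization, certificate rotation, and the variables exported above); the author's exact file may have differed:

cat > kubelet-config.yaml <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_IP}
port: 10250
readOnlyPort: 0                      # disable the insecure read-only port
authentication:
  anonymous:
    enabled: false                   # reject anonymous requests
  x509:
    clientCAFile: /etc/kubernetes/cert/ca.pem
  webhook:
    enabled: true
authorization:
  mode: Webhook                      # authorize via kube-apiserver's SubjectAccessReview API
clusterDomain: cluster.local
clusterDNS:
  - ${CLUSTER_DNS_SVC_IP}
podCIDR: ${CLUSTER_CIDR}
rotateCertificates: true             # request a new client certificate before expiry
EOF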
Requests that pass neither x509 certificate authentication nor webhook authentication (whether from kube-apiserver or any other client) are rejected with Unauthorized.

Distribute the kubelet config file to each node:

cd /opt/k8s/work
export node_ip=192.168.0.114
scp kubelet-config.yaml root@${node_ip}:/etc/kubernetes/kubelet-config.yaml
Create and distribute the kubelet service startup file

cd /opt/k8s/work
export K8S_DIR=/data/k8s/k8s
export NODE_NAME=slave
Write the systemd unit file kubelet.service:
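The unit file body was also lost in extraction. The ExecStart flags below are reconstructed from the systemctl status output later in this article; the [Unit] and [Install] boilerplate is a reasonable guess, not the author's verbatim file:

cat > kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --root-dir=${K8S_DIR}/kubelet \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=${NODE_NAME} \\
  --image-pull-progress-deadline=15m \\
  --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF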
Install and distribute the kubelet service file

cd /opt/k8s/work
export node_ip=192.168.0.114
scp kubelet.service root@${node_ip}:/etc/systemd/system/kubelet.service
Grant kube-apiserver permission to access the kubelet API

When kubectl exec, run, logs, and similar commands are executed, apiserver forwards the request to kubelet's https port. The RBAC rule below authorizes the user corresponding to the certificate apiserver uses (kubernetes.pem, CN: kubernetes-api) to access the kubelet API; see kubelet-auth for details.

kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user=kubernetes-api
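A quick way to confirm the binding (not in the original) is:

kubectl describe clusterrolebinding kube-apiserver:kubelet-apis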
Bootstrap Token Auth and granting permissions

At startup, kubelet checks whether the file named by --kubeletconfig exists. If it does not, kubelet uses the kubeconfig given by --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver. When kube-apiserver receives the CSR, it authenticates the embedded token and, on success, sets the request's user to system:bootstrap:<token-id> (for example system:bootstrap:5t989l in the output below) and its group to system:bootstrappers. By default this user and group have no permission to create CSRs, so create a clusterrolebinding that binds the group system:bootstrappers to the clusterrole system:node-bootstrapper:

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
Start the kubelet service

kubelet uses --bootstrap-kubeconfig to send a CSR to kube-apiserver; once the CSR is approved, kube-controller-manager creates the TLS client certificate and private key for kubelet and writes the file named by --kubeletconfig.

Note: kube-controller-manager must be configured with the --cluster-signing-cert-file and --cluster-signing-key-file parameters, or it will not create certificates and private keys for TLS Bootstrap.

export K8S_DIR=/data/k8s/k8s
export node_ip=192.168.0.114
ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
Problems encountered

1. After starting kubelet, kubectl get csr returned nothing, and kubelet logged an error:

journalctl -u kubelet -a | grep -A 2 'certificate_manager.go'
Failed while requesting a signed certificate from the master: cannot create certificate signing request: Unauthorized
Check the kube-apiserver service log:

root@master:/opt/k8s/work# journalctl -eu kube-apiserver
Unable to authenticate the request due to an error: invalid bearer token
The cause: the following flag was missing from the kube-apiserver service startup file. Appending it and restarting kube-apiserver resolved the problem:

--enable-bootstrap-token-auth \
2. After startup, kubelet kept generating CSRs continuously, even after they were manually approved. The cause was that the kube-controller-manager service had stopped; restarting it resolved the issue.

Check the kubelet situation:
root@master:/opt/k8s/work# kubectl get csr
NAME        AGE    REQUESTOR                 CONDITION
csr-kl5mg   49s    system:bootstrap:5t989l   Pending
csr-mrmkf   2m1s   system:bootstrap:5t989l   Pending
csr-ql68g   13s    system:bootstrap:5t989l   Pending
csr-rvl2v   84s    system:bootstrap:5t989l   Pending
Manually approve the CSRs:

root@master:/opt/k8s/work# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
certificatesigningrequest.certificates.k8s.io/csr-kl5mg approved
certificatesigningrequest.certificates.k8s.io/csr-mrmkf approved
certificatesigningrequest.certificates.k8s.io/csr-ql68g approved
certificatesigningrequest.certificates.k8s.io/csr-rvl2v approved
A moment later another CSR had already appeared, confirming that kubelet kept re-requesting:

root@master:/opt/k8s/work# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
certificatesigningrequest.certificates.k8s.io/csr-f4smx approved
Check the node information:

root@master:/opt/k8s/work# kubectl get nodes
NAME STATUS ROLES AGE VERSION
slave Ready
Check the kubelet service status:

export node_ip=192.168.0.114
root@master:/opt/k8s/work# ssh root@${node_ip} "systemctl status kubelet.service"
● kubelet.service - Kubernetes Kubelet
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2020-02-10 22:48:41 CST; 12min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 15529 (kubelet)
Tasks: 19 (limit: 4541)
CGroup: /system.slice/kubelet.service
└─15529 /opt/k8s/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig --cert-dir=/etc/kubernetes/cert --root-dir=/data/k8s/k8s/kubelet --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/kubernetes/kubelet-config.yaml --hostname-override=slave --image-pull-progress-deadline=15m --volume-plugin-dir=/data/k8s/k8s/kubelet/kubelet-plugins/volume/exec/ --logtostderr=true --v=2
Feb 10 22:49:04 slave kubelet[15529]: I0210 22:49:04.846285 15529 kubelet_node_status.go:73] Successfully registered node slave
Feb 10 22:49:04 slave kubelet[15529]: I0210 22:49:04.930745 15529 certificate_manager.go:402] Rotating certificates
Feb 10 22:49:14 slave kubelet[15529]: I0210 22:49:14.966351 15529 kubelet_node_status.go:486] Recording NodeReady event message for node slave
Feb 10 22:49:29 slave kubelet[15529]: I0210 22:49:29.580410 15529 certificate_manager.go:531] Certificate expiration is 2030-02-06 04:19:00 +0000 UTC, rotation deadline is 2029-01-21 13:08:18.850930128 +0000 UTC
Feb 10 22:49:29 slave kubelet[15529]: I0210 22:49:29.580484 15529 certificate_manager.go:281] Waiting 78430h18m49.270459727s for next certificate rotation
Feb 10 22:49:30 slave kubelet[15529]: I0210 22:49:30.580981 15529 certificate_manager.go:531] Certificate expiration is 2030-02-06 04:19:00 +0000 UTC, rotation deadline is 2027-07-14 16:09:26.990162158 +0000 UTC
Feb 10 22:49:30 slave kubelet[15529]: I0210 22:49:30.581096 15529 certificate_manager.go:281] Waiting 65065h19m56.409078053s for next certificate rotation
Feb 10 22:53:44 slave kubelet[15529]: I0210 22:53:44.911705 15529 kubelet.go:1312] Image garbage collection succeeded
Feb 10 22:53:45 slave kubelet[15529]: I0210 22:53:45.053792 15529 container_manager_linux.go:469] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Feb 10 22:58:45 slave kubelet[15529]: I0210 22:58:45.054225 15529 container_manager_linux.go:469] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.servic
Configuring the kube-proxy component

Create the kube-proxy certificate and private key

Create the certificate signing request file
cd /opt/k8s/work
Write the CSR file kube-proxy-csr.json:
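The JSON body was lost in extraction. A typical CSR for kube-proxy is sketched below: the CN must be system:kube-proxy, which the predefined clusterrolebinding system:node-proxier grants the rights kube-proxy needs; the names fields are illustrative guesses, not the author's exact values:

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF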
Generate the certificate and private key:

cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*pem
Install the certificate:

cd /opt/k8s/work
export node_ip=192.168.0.114
scp kube-proxy*.pem root@${node_ip}:/etc/kubernetes/cert/
Create the kubeconfig file

cd /opt/k8s/work
export KUBE_APISERVER=https://192.168.0.107:6443
kubectl config set-cluster kubernetes --certificate-authority=/opt/k8s/work/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Distribute the kubeconfig

cd /opt/k8s/work
export node_ip=192.168.0.114
scp kube-proxy.kubeconfig root@${node_ip}:/etc/kubernetes/kube-proxy.kubeconfig
Create the kube-proxy config file

cd /opt/k8s/work
export CLUSTER_CIDR="172.30.0.0/16"
export NODE_IP=192.168.0.114
export NODE_NAME=slave
Write the config file kube-proxy-config.yaml:
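The YAML body was truncated. Below is a minimal sketch consistent with the rest of this article: IPVS mode (matching the modprobe ip_vs_rr step and the ipvsadm output at the end) and the 10249/10256 ports seen in the netstat output later; the author's exact file may have differed:

cat > kube-proxy-config.yaml <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: ${NODE_IP}
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: ${CLUSTER_CIDR}
healthzBindAddress: ${NODE_IP}:10256
metricsBindAddress: ${NODE_IP}:10249
hostnameOverride: ${NODE_NAME}
mode: ipvs
EOF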
Distribute the kube-proxy config file

cd /opt/k8s/work
export node_ip=192.168.0.114
scp kube-proxy-config.yaml root@${node_ip}:/etc/kubernetes/kube-proxy-config.yaml
Create the kube-proxy service startup file

cd /opt/k8s/work
export K8S_DIR=/data/k8s/k8s
Write the systemd unit file kube-proxy.service:
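The unit body was lost in extraction; a sketch under the assumption that kube-proxy, like kubelet, is installed in /opt/k8s/bin and reads the config file distributed above:

cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=${K8S_DIR}/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy-config.yaml \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF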
Distribute the kube-proxy service startup file:

export node_ip=192.168.0.114
scp kube-proxy.service root@${node_ip}:/etc/systemd/system/
Start the kube-proxy service

export node_ip=192.168.0.114
export K8S_DIR=/data/k8s/k8s
ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-proxy"
ssh root@${node_ip} "modprobe ip_vs_rr"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
Check the startup result; make sure the status is active (running), otherwise inspect the logs to find the cause:

export node_ip=192.168.0.114
ssh root@${node_ip} "systemctl status kube-proxy |grep Active"
If anything looks wrong, check with:

journalctl -u kube-proxy
Check status:

root@slave:~# netstat -lnpt|grep kube-prox
tcp 0 0 192.168.0.114:10256 0.0.0.0:* LISTEN 23078/kube-proxy
tcp 0 0 192.168.0.114:10249 0.0.0.0:* LISTEN 23078/kube-proxy
root@slave:~# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.254.0.1:443 rr
-> 192.168.0.107:6443 Masq 1 0 0