Practicing k8s with Yanyanyan --- Kubernetes 1.16.10 Binary High-Availability Cluster Deployment: etcd Deployment

2021-01-27 02:14

Views: 522


In this section we deploy a high-availability etcd cluster on the three master nodes. The official recommendation is 5 or 7 members, but 3 members is sufficient for our purposes. Let's get started!
  • I. Environment preparation

      10.13.33.29  etcd-0
      10.13.33.40  etcd-1
      10.13.33.38  etcd-2

    etcd data directory: ETCD_DATA_DIR="/data/k8s/etcd/data"

    etcd WAL directory: ETCD_WAL_DIR="/data/k8s/etcd/wal"
    (ideally an SSD partition, or at least a different disk partition from ETCD_DATA_DIR)

  • II. Deploying and installing etcd

1. Create the etcd certificates

cd /opt/k8s/work
Create the certificate signing request file etcd-csr.json in this directory.
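A typical CSR for this cluster can be written with a heredoc. The hosts list must contain every etcd node IP (plus 127.0.0.1 for the local health check); the names fields (C/ST/L/O/OU) below are illustrative assumptions, not values from the original post:

```shell
# hosts must list all etcd node IPs so one certificate works on every member
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.13.33.29",
    "10.13.33.40",
    "10.13.33.38"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "ops"
    }
  ]
}
EOF
```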

### Generate the certificate and private key

cfssl gencert -ca=/opt/k8s/work/ca.pem     -ca-key=/opt/k8s/work/ca-key.pem     -config=/opt/k8s/work/ca-config.json     -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
ls etcd*pem

### Distribute to the etcd nodes (repeat on each etcd node)

mkdir -p /etc/etcd/cert
cp etcd*.pem /etc/etcd/cert/
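The cp above only places the certificates on the local machine; to push them to the other etcd nodes, a loop along these lines can be used. This is a sketch assuming passwordless root SSH; adjust the IP list to your environment:

```shell
# Push the etcd certificate and key to every etcd node
for node_ip in 10.13.33.29 10.13.33.40 10.13.33.38; do
  ssh root@${node_ip} "mkdir -p /etc/etcd/cert"
  scp etcd*.pem root@${node_ip}:/etc/etcd/cert/
done
```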

2. Install etcd

wget https://github.com/etcd-io/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz   ## download etcd
tar -xvf etcd-v3.3.13-linux-amd64.tar.gz                    ## unpack
cp etcd-v3.3.13-linux-amd64/etcd* /opt/k8s/bin/             ## copy the etcd binaries into place
chmod +x /opt/k8s/bin/*                                     ## make them executable
mkdir -p /data/k8s/etcd/data /data/k8s/etcd/wal             ## create the etcd data and WAL directories

3. Create etcd.service

Create the systemd unit template etcd.service.template in this directory.
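A sketch of what the unit template can look like, assuming the directory layout and certificate paths used in this post. ##NODE_NAME## and ##NODE_IP## are placeholders to substitute per node (for example with sed) before installing the result as /etc/systemd/system/etcd.service; the exact flag set is an assumption modeled on a standard three-node TLS configuration:

```shell
# The cluster member names and IPs below match the `member list` output later in this post
cat > etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/data/k8s/etcd/data
ExecStart=/opt/k8s/bin/etcd \\
  --data-dir=/data/k8s/etcd/data \\
  --wal-dir=/data/k8s/etcd/wal \\
  --name=##NODE_NAME## \\
  --cert-file=/etc/etcd/cert/etcd.pem \\
  --key-file=/etc/etcd/cert/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-cert-file=/etc/etcd/cert/etcd.pem \\
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --listen-peer-urls=https://##NODE_IP##:2380 \\
  --initial-advertise-peer-urls=https://##NODE_IP##:2380 \\
  --listen-client-urls=https://##NODE_IP##:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://##NODE_IP##:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=master-01=https://10.13.33.38:2380,master-02=https://10.13.33.29:2380,master-03=https://10.13.33.40:2380 \\
  --initial-cluster-state=new
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```

Listening on http://127.0.0.1:2379 as well as the TLS endpoint is what allows the plain `curl 127.0.0.1:2379/health` check used below.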

4. Start etcd

systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd && systemctl status etcd|grep Active
curl -L 127.0.0.1:2379/health          ### check that etcd is up

5. Verify cluster health

[root@master-01 work]# curl -L 127.0.0.1:2379/health   
{"health":"true"}
[root@master-01 work]# ETCDCTL_API=3 /opt/k8s/bin/etcdctl   -w table --cacert=/etc/kubernetes/cert/ca.pem   --cert=/etc/etcd/cert/etcd.pem   --key=/etc/etcd/cert/etcd-key.pem   --endpoints=https://10.13.33.29:2379,https://10.13.33.38:2379,https://10.13.33.40:2379  endpoint status
+--------------------------+------------------+---------+---------+-----------+-----------+------------+
|         ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+--------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://10.13.33.29:2379 | 3d57cc849d2a16** |  3.3.13 |   33 MB |      true |     36971 |    6201481 |
| https://10.13.33.38:2379 | bff9394ca77d77** |  3.3.13 |   33 MB |     false |     36971 |    6201481 |
| https://10.13.33.40:2379 | 2ce7325ad513f9** |  3.3.13 |   33 MB |     false |     36971 |    6201481 |
+--------------------------+------------------+---------+---------+-----------+-----------+------------+
[root@master-01 work]# ETCDCTL_API=3 /opt/k8s/bin/etcdctl   -w table --cacert=/etc/kubernetes/cert/ca.pem   --cert=/etc/etcd/cert/etcd.pem   --key=/etc/etcd/cert/etcd-key.pem   --endpoints=https://10.13.33.29:2379,https://10.13.33.38:2379,https://10.13.33.40:2379  member list
+------------------+---------+-----------+--------------------------+--------------------------+
|        ID        | STATUS  |   NAME    |        PEER ADDRS        |       CLIENT ADDRS       |
+------------------+---------+-----------+--------------------------+--------------------------+
| 2ce7325ad513f9** | started | master-03 | https://10.13.33.40:2380 | https://10.13.33.40:2379 |
| 3d57cc849d2a16** | started | master-02 | https://10.13.33.29:2380 | https://10.13.33.29:2379 |
| bff9394ca77d77** | started | master-01 | https://10.13.33.38:2380 | https://10.13.33.38:2379 |
+------------------+---------+-----------+--------------------------+--------------------------+
[root@master-01 work]# ETCDCTL_API=3 /opt/k8s/bin/etcdctl   -w table --cacert=/etc/kubernetes/cert/ca.pem   --cert=/etc/etcd/cert/etcd.pem   --key=/etc/etcd/cert/etcd-key.pem   --endpoints=https://10.13.33.29:2379,https://10.13.33.38:2379,https://10.13.33.40:2379  endpoint health
https://10.13.33.38:2379 is healthy: successfully committed proposal: took = 2.174634ms
https://10.13.33.29:2379 is healthy: successfully committed proposal: took = 1.465878ms
https://10.13.33.40:2379 is healthy: successfully committed proposal: took = 2.36525ms
Once kubectl is installed, you can also check etcd health through kubectl:
[root@master-01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   

6. Write the Pod network config into etcd

etcdctl   --endpoints=https://10.13.33.29:2379,https://10.13.33.38:2379,https://10.13.33.40:2379   --ca-file=/opt/k8s/work/ca.pem   --cert-file=/opt/k8s/work/flanneld.pem   --key-file=/opt/k8s/work/flanneld-key.pem   mk /kubernetes/network/config '{"Network":"172.30.0.0/16", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}'
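To confirm the key was written, it can be read back with the same endpoints and flannel certificate flags (this of course requires the running cluster):

```shell
# Read back the network config flannel will consume (etcd v2 API, like mk above)
etcdctl   --endpoints=https://10.13.33.29:2379,https://10.13.33.38:2379,https://10.13.33.40:2379   --ca-file=/opt/k8s/work/ca.pem   --cert-file=/opt/k8s/work/flanneld.pem   --key-file=/opt/k8s/work/flanneld-key.pem   get /kubernetes/network/config
```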

7. Restart flanneld and docker

systemctl restart flanneld docker
systemctl status flanneld docker | grep Active
ip addr show| grep flannel.1

8. Check that flannel and docker picked up the correct subnet

[root@master-01 work]# ip addr show|grep flannel.1
5: flannel.1:  mtu 1450 qdisc noqueue state UNKNOWN group default 
    inet 172.30.48.0/32 scope global flannel.1
[root@master-01 work]# ip addr show|grep docker 
6: docker0:  mtu 1450 qdisc noqueue state UP group default 
    inet 172.30.48.1/21 brd 172.30.55.255 scope global docker0
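The /21 on docker0 is exactly what the config written to etcd implies: flannel carves the 172.30.0.0/16 Network into per-node /21 subnets, and this node received 172.30.48.0/21. The arithmetic can be sanity-checked in the shell:

```shell
# SubnetLen=21 out of a /16 Network
echo "node subnets available:    $(( 1 << (21 - 16) ))"       # 32 subnets in total
echo "subnet stride (3rd octet): $(( 1 << (24 - 21) ))"       # 8 -> 172.30.0.0, 172.30.8.0, ..., 172.30.48.0
echo "usable pod IPs per node:   $(( (1 << (32 - 21)) - 2 ))" # 2046 addresses per /21
```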


Original post: https://blog.51cto.com/13534471/2507969

