kubeadm部署kubernetes 1.12集群
2021-05-01 23:29
kubeadm is the tool the Kubernetes project officially provides for quickly installing a Kubernetes cluster. It is released in step with every Kubernetes version, and each release adjusts some of the practices around cluster configuration, so experimenting with kubeadm is a good way to learn the project's latest best practices for cluster setup. The Kubernetes document "Creating a single master cluster with kubeadm" notes that kubeadm's main features are already in beta and are expected to reach GA during 2018, so kubeadm is getting ever closer to being usable in production. Our own production Kubernetes clusters are highly available deployments installed from binaries with ansible; trying out kubeadm in Kubernetes 1.12 here lets us follow the official best practices for cluster initialization and configuration and further improve our ansible deployment scripts.
Note that Kubernetes 1.12 has been validated against Docker 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09 and 18.06; the minimum supported Docker version is 1.11.1 and the highest is 18.06. Since the latest Docker release is already 18.09, the installation below pins the version to 18.06.1-ce.
Original article: https://www.cnblogs.com/knmax/p/12141577.html
System environment preparation
Environment
ip            hostname        OS        k8s-role
192.168.2.45  k8s-master-45   centos 7  master
192.168.2.46  k8s-work-46     centos 7  work
192.168.2.47  k8s-work-47     centos 7  work
System configuration
cat /etc/hosts
192.168.2.45 k8s-master-45
192.168.2.46 k8s-work-46
192.168.2.47 k8s-work-47
# Disable SELinux (setenforce 0 turns it off immediately; the config change takes effect after reboot)
setenforce 0
vi /etc/selinux/config
SELINUX=disabled
# Disable firewalld
systemctl stop firewalld
systemctl disable firewalld
Configure the bridge parameters so that bridged traffic does not bypass iptables:
vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
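After applying the sysctl settings it is worth verifying them. A minimal sketch, with a function name of my own; the directory is a parameter so it can be exercised anywhere, but on a real node you would pass /proc/sys:

```shell
# check_k8s_sysctls DIR: verify the three kernel parameters above are set
# to 1 under DIR (normally /proc/sys); report and fail on any mismatch.
check_k8s_sysctls() {
  local root=$1 rc=0 key
  for key in net/bridge/bridge-nf-call-iptables \
             net/bridge/bridge-nf-call-ip6tables \
             net/ipv4/ip_forward; do
    if [ "$(cat "$root/$key" 2>/dev/null)" != "1" ]; then
      echo "not set: $key"
      rc=1
    fi
  done
  return $rc
}

# On a node: check_k8s_sysctls /proc/sys
```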
swapoff -a  # temporary
# Permanent: comment out the swap entry in /etc/fstab
vim /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0
mount -a
reboot
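The fstab edit can also be scripted across all nodes. A minimal sketch (the function name is my own); it comments out any uncommented line with a whitespace-delimited swap field:

```shell
# disable_swap_entries FILE: comment out every active fstab line that
# mounts a swap area (contains a whitespace-delimited "swap" field), in place.
disable_swap_entries() {
  sed -i -E 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$1"
}

# e.g. disable_swap_entries /etc/fstab
```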
Install docker-ce
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce-18.06.1.ce-3.el7 -y
systemctl start docker
systemctl enable docker
Adjust the time zone and sync the clock; if the time zone is wrong or the clocks drift too far apart, certificate validation will fail.
yum install ntpdate -y
ntpdate time.windows.com
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
Deploying Kubernetes with kubeadm
Install kubeadm and kubelet on every node.
Add the Kubernetes yum repo.
# Google (upstream)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# Aliyun mirror (choose one of the two repos)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet kubeadm kubectl ipvsadm
Initialize the cluster with kubeadm init
# Enable kubelet on boot
systemctl enable kubelet.service
kubeadm v1.11+ added the kubeadm config print-default command, which conveniently dumps kubeadm's default configuration so you can save it to a file and edit it to taste.
kubeadm config print-default
# cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
controllerManagerExtraArgs:
  horizontal-pod-autoscaler-use-rest-clients: "true"
  horizontal-pod-autoscaler-sync-period: "10s"
  node-monitor-grace-period: "10s"
apiServerExtraArgs:
  runtime-config: "api/all=true"
kubernetesVersion: "stable-1.12" # resolves to v1.12.2
imageRepository: registry.aliyuncs.com/google_containers # use the Aliyun image mirror
The most painful part of this deployment method is pulling the images, mainly the ones listed below; alternatively, pull them from a domestic mirror and retag them locally.
kubeadm config images pull --config kubeadm.yaml
registry.aliyuncs.com/google_containers/kube-apiserver:v1.12.2
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.12.2
registry.aliyuncs.com/google_containers/kube-scheduler:v1.12.2
registry.aliyuncs.com/google_containers/kube-proxy:v1.12.2
registry.aliyuncs.com/google_containers/pause:3.1
registry.aliyuncs.com/google_containers/etcd:3.2.24
registry.aliyuncs.com/google_containers/coredns:1.2.2
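If anything still insists on the upstream image names, the mirror images can be pulled and retagged back to k8s.gcr.io. A sketch of the retagging loop (the helper name is my own):

```shell
# retag_target IMAGE: map a mirror image name such as
# registry.aliyuncs.com/google_containers/pause:3.1 to its upstream
# k8s.gcr.io name by keeping only the final path component.
retag_target() {
  echo "k8s.gcr.io/${1##*/}"
}

# for img in registry.aliyuncs.com/google_containers/kube-proxy:v1.12.2; do
#   docker pull "$img" && docker tag "$img" "$(retag_target "$img")"
# done
```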
kubeadm init --config kubeadm.yaml
# If you need to re-initialize, reset first
kubeadm reset
This completes the deployment of the Kubernetes master; when it finishes, kubeadm prints:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.2.45:6443 --token 2qncip.5p98b3zuvi9rumwk --discovery-token-ca-cert-hash sha256:3227d728428eaba7145196d66dc954a554be6a3ae2d32d088632a8561602fd48
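The --discovery-token-ca-cert-hash value in the join command is just the SHA-256 of the cluster CA's public key, so it can be recomputed from the certificate if the line above is lost. A sketch (the function name is my own; on the master the certificate is /etc/kubernetes/pki/ca.crt):

```shell
# ca_cert_hash CRT: print the sha256:<hex> hash of the certificate's
# public key, in the format kubeadm join expects.
ca_cert_hash() {
  echo "sha256:$(openssl x509 -pubkey -noout -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | awk '{print $NF}')"
}

# e.g. ca_cert_hash /etc/kubernetes/pki/ca.crt
```

Note that bootstrap tokens expire (24 hours by default); a fresh, complete join command can be printed on the master with kubeadm token create --print-join-command.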
kubeadm also prints the configuration commands needed the first time you use the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master-45 NotReady master 5m13s v1.12.2
The node is NotReady because no network plugin has been deployed yet. The output of kubectl describe shows that network-dependent Pods such as CoreDNS and kube-controller-manager are all Pending, i.e. they failed to schedule. This is expected, since the master node's network is not yet ready.
Deploy the network plugin
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
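There is nothing magic about the long Weave URL: it simply embeds the base64-encoded output of kubectl version as a query parameter so Weave can serve a manifest matching the cluster version. A sketch of how it is assembled (the function name is my own):

```shell
# weave_url VERSION_OUTPUT: build the Weave Net manifest URL from the
# given `kubectl version` output, base64-encoded with newlines removed.
weave_url() {
  echo "https://cloud.weave.works/k8s/net?k8s-version=$(printf '%s' "$1" | base64 | tr -d '\n')"
}
```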
# Status after the network plugin is deployed
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master-45 Ready master 6m29s v1.12.2
Deploy the worker nodes
Pull the pause image:
sudo docker pull registry.aliyuncs.com/google_containers/pause:3.1
sudo docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
sudo systemctl start kubelet
# Run the kubeadm join command generated when the master was deployed
kubeadm join 192.168.2.45:6443 --token 2qncip.5p98b3zuvi9rumwk --discovery-token-ca-cert-hash sha256:3227d728428eaba7145196d66dc954a554be6a3ae2d32d088632a8561602fd48
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master-45 Ready master 26m v1.12.2
k8s-work-46 Ready
Add role labels:
kubectl label node k8s-work-46 node-role.kubernetes.io/worker=worker
kubectl label node k8s-work-47 node-role.kubernetes.io/worker=worker
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master-45 Ready master 27m v1.12.2
k8s-work-46 Ready worker 10m v1.12.2
k8s-work-47 Ready worker 3m52s v1.12.2
Article link: http://soscw.com/index.php/essay/81086.html