Kubernetes 1.8 Deployment and Installation

2021-05-18 04:27

Tags: dashboard, kubernetes

Environment:

The deployment described here uses three virtual machines:

master: 172.17.80.10, node01: 172.17.80.11, node02: 172.17.80.12

Linux kernel: 3.10.0-327.el7.x86_64    Kubernetes version: 1.8

Downloading the images and the Kubernetes packages requires access to Google, so I downloaded everything in advance through a proxy and bundled it into a single archive (kubernetes-all-1.8.tar.gz). It contains the configuration files, the Docker packages, the Kubernetes packages, and the images listed below:

docker_soft: packages needed to install Docker
images: image tarballs
k8s_soft: Kubernetes packages
yaml: manifest files used during deployment

The archive has been uploaded to Baidu Pan: http://pan.baidu.com/s/1slOCHop  password: cm1o


Images and versions used by Kubernetes:

gcr.io/google_containers/kube-apiserver-amd64  v1.8.2        
gcr.io/google_containers/kube-controller-manager-amd64  v1.8.2        
gcr.io/google_containers/kube-scheduler-amd64 v1.8.2        
gcr.io/google_containers/kube-proxy-amd64  v1.8.2        
gcr.io/google_containers/kubernetes-dashboard-init-amd64 v1.0.1        
gcr.io/google_containers/kubernetes-dashboard-amd64  v1.7.1        
gcr.io/google_containers/k8s-dns-sidecar-amd64 1.14.5        
gcr.io/google_containers/k8s-dns-kube-dns-amd64 1.14.5        
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 1.14.5        
quay.io/coreos/flannel   v0.9.0-amd64  
gcr.io/google_containers/heapster-influxdb-amd64  v1.3.3        
gcr.io/google_containers/heapster-grafana-amd64  v4.4.3        
gcr.io/google_containers/heapster-amd64  v1.4.0        
gcr.io/google_containers/etcd-amd64  3.0.17        
gcr.io/google_containers/pause-amd64 3.0
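
For reference, a bundle like this can be produced on any machine that can reach gcr.io by pulling each image and saving it to a tarball. The loop below is only a sketch of one possible approach; image-list.txt, the tarball naming scheme, and the images.txt index are my own assumptions, not something taken from the original package:

    # Run on an internet-connected host (behind the proxy).
    # image-list.txt holds one image name per line, e.g. gcr.io/google_containers/pause-amd64:3.0
    while read img; do
        docker pull "$img"                         # fetch the image locally
        tarball=$(echo "$img" | tr '/:' '__').tar  # derive a flat file name
        docker save "$img" -o "$tarball"           # write the image to a tarball
        echo "$tarball" >> images.txt              # record it for the docker load loop used later
    done < image-list.txt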

The deployment procedure follows.

1. Configure the system environment

[root@master ~]# setenforce 0 && iptables -F && service iptables save
[root@master ~]# swapoff -a
[root@master ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@master ~]# sysctl --system
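
Note that setenforce 0 and swapoff -a only last until the next reboot. If you want these settings to persist, the configuration files can be updated as well; this is a minimal sketch assuming the default CentOS 7 file locations:

    # Keep SELinux out of enforcing mode after a reboot
    sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
    # Comment out the swap entry so swap stays off after a reboot
    sed -i '/ swap / s/^/#/' /etc/fstab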

2. Unpack the Kubernetes bundle and install Docker

[root@master ~]# tar xf kubernetes-all-1.8.tar.gz
[root@master ~]# cd kubernetes-all-1.8
[root@master kubernetes-all-1.8]# cd docker_soft/
[root@master docker_soft]# yum localinstall -y *
[root@master docker_soft]# systemctl enable docker && systemctl start docker
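
Before going on, it is worth confirming that Docker is running and checking which cgroup driver it uses, since kubeadm expects the kubelet and Docker to agree on it. These checks are not part of the original bundle, just a quick sanity test:

[root@master docker_soft]# docker version
[root@master docker_soft]# docker info 2>/dev/null | grep -i 'cgroup driver'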

3. Load the required images and verify them

[root@master ~]# cd /root/kubernetes-all-1.8/images
[root@master images]# for i in $(cat images.txt); do docker load -i $i; done
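
To confirm the images were imported, list them (not in the original write-up); the grep pattern simply matches the registries of the images shown earlier:

[root@master images]# docker images | grep -E 'gcr.io|quay.io'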

4. Install the Kubernetes packages

[root@master ~]# cd /root/kubernetes-all-1.8/k8s_soft
[root@master k8s_soft]# yum localinstall -y kubelet kubeadm kubectl
[root@master k8s_soft]# systemctl enable kubelet && systemctl start kubelet
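
A quick way to confirm that the expected 1.8 versions were installed (standard commands, not part of the original steps):

[root@master k8s_soft]# kubelet --version
[root@master k8s_soft]# kubeadm version
[root@master k8s_soft]# kubectl version --client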

    
5. Initialize the cluster with kubeadm

[root@master ~]# kubeadm init --apiserver-advertise-address=172.17.80.10 --pod-network-cidr=10.244.0.0/16
    [kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
    [init] Using Kubernetes version: v1.8.2
    [init] Using Authorization modes: [Node RBAC]
    [preflight] Skipping pre-flight checks
    [kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
    [certificates] Using the existing ca certificate and key.
    [certificates] Using the existing apiserver certificate and key.
    [certificates] Using the existing apiserver-kubelet-client certificate and key.
    [certificates] Using the existing sa key.
    [certificates] Using the existing front-proxy-ca certificate and key.
    [certificates] Using the existing front-proxy-client certificate and key.
    [certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
    [kubeconfig] Using existing up-to-date KubeConfig file: "admin.conf"
    [kubeconfig] Using existing up-to-date KubeConfig file: "kubelet.conf"
    [kubeconfig] Using existing up-to-date KubeConfig file: "controller-manager.conf"
    [kubeconfig] Using existing up-to-date KubeConfig file: "scheduler.conf"
    [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
    [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
    [init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
    [init] This often takes around a minute; or longer if the control plane images have to be pulled.
    [apiclient] All control plane components are healthy after 25.003235 seconds
    [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [markmaster] Will mark node master.junly.com as master by adding a label and a taint
    [markmaster] Master master.junly.com tainted and labelled with key/value: node-role.kubernetes.io/master=""
    [bootstraptoken] Using token: 916ff9.96f48b52e66d9e03
    [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: kube-dns
    [addons] Applied essential addon: kube-proxy
    Your Kubernetes master has initialized successfully!
    To start using your cluster, you need to run (as a regular user):
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      http://kubernetes.io/docs/admin/addons/
    You can now join any number of machines by running the following on each node
    as root:
      kubeadm join --token 916ff9.96f48b52e66d9e03 172.17.80.10:6443 --discovery-token-ca-cert-hash sha256:2ae7f364929e442ed04bb1e0af840a343bb1efb356c5301ae7aed566b1f30d40
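
As the init output warns, the bootstrap token expires after 24 hours by default. If a node needs to be added later, a new token can be created on the master and the CA certificate hash recomputed; this is a sketch of the standard kubeadm procedure rather than a step from the original walkthrough:

[root@master ~]# kubeadm token create
[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex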

 6. Set up kubectl access to the cluster

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
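
With the kubeconfig in place, kubectl should be able to reach the API server (a quick check that is not in the original steps). The master will report NotReady until the flannel network add-on from step 8 is installed:

[root@master ~]# kubectl get nodes
[root@master ~]# kubectl cluster-info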

 
7. Remove the master taint so that Pods can also be scheduled on the master node (optional)

[root@master kubernetes-images-1.8]# kubectl taint nodes --all node-role.kubernetes.io/master-
    node "master.junly.com" untainted

 
8. Install the flannel network add-on

[root@master ~]# kubectl create -f kube-flannel.yml 
    clusterrole "flannel" created
    clusterrolebinding "flannel" created
    serviceaccount "flannel" created
    configmap "kube-flannel-cfg" created
    daemonset "kube-flannel-ds" created

9. Check that the deployment is healthy

[root@master ~]# kubectl get pods --all-namespaces
    NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE
    kube-system   etcd-master.junly.com                      1/1       Running   0          4m
    kube-system   kube-apiserver-master.junly.com            1/1       Running   0          4m
    kube-system   kube-controller-manager-master.junly.com   1/1       Running   1          4m
    kube-system   kube-dns-545bc4bfd4-nmhwl                  3/3       Running   0          5m
    kube-system   kube-flannel-ds-5mkm7                      1/1       Running   0          52s
    kube-system   kube-proxy-lmhzr                           1/1       Running   0          5m
    kube-system   kube-scheduler-master.junly.com            1/1       Running   0          4m

10. Deploy the worker nodes
    After completing steps 1 through 4 on each node, join it to the cluster:

[root@node01 ~]#kubeadm join --token 916ff9.96f48b52e66d9e03 172.17.80.10:6443 --discovery-token-ca-cert-hash sha256:2ae7f364929e442ed04bb1e0af840a343bb1efb356c5301ae7aed566b1f30d40
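
Back on the master, the new nodes should appear and switch to Ready once flannel starts on them (a simple check, not in the original steps):

[root@master ~]# kubectl get nodes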

11. Deploy the Dashboard

[root@master kubernetes-images-1.8]# kubectl create -f kubernetes-dashboard.yaml 
    secret "kubernetes-dashboard-certs" created
    serviceaccount "kubernetes-dashboard" created
    role "kubernetes-dashboard-minimal" created
    rolebinding "kubernetes-dashboard-minimal" created
    deployment "kubernetes-dashboard" created
    service "kubernetes-dashboard" created
 
[root@master ~]# kubectl get pods -n kube-system
    NAME                                       READY     STATUS    RESTARTS   AGE
    etcd-master.junly.com                      1/1       Running   0          10m
    kube-apiserver-master.junly.com            1/1       Running   0          10m
    kube-controller-manager-master.junly.com   1/1       Running   1          10m
    kube-dns-545bc4bfd4-nmhwl                  3/3       Running   0          11m
    kube-flannel-ds-5mkm7                      1/1       Running   0          6m
    kube-flannel-ds-l9xvp                      1/1       Running   0          1m
    kube-flannel-ds-v6hht                      1/1       Running   0          1m
    kube-proxy-4xgj8                           1/1       Running   0          1m
    kube-proxy-b72xm                           1/1       Running   0          1m
    kube-proxy-lmhzr                           1/1       Running   0          11m
    kube-scheduler-master.junly.com            1/1       Running   0          10m
    kubernetes-dashboard-747c4f7cf-9v9t8       1/1       Running   0          10s

12. Expose the Dashboard on a NodePort

[root@master ~]# kubectl edit service kubernetes-dashboard -n kube-system
    # Please edit the object below. Lines beginning with a '#' will be ignored,
    # and an empty file will abort the edit. If an error occurs while saving this file will be
    # reopened with the relevant failures.
    #
    apiVersion: v1
    kind: Service
    metadata:
      creationTimestamp: 2017-10-26T03:10:16Z
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kube-system
      resourceVersion: "1334"
      selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
      uid: 31166784-b9fb-11e7-abe1-000c29c7c723
    spec:
      clusterIP: 10.96.47.166
      externalTrafficPolicy: Cluster
      ports:
      - nodePort: 31334
        port: 443
        protocol: TCP
        targetPort: 8443
      selector:
        k8s-app: kubernetes-dashboard
      sessionAffinity: None
      type: NodePort            # change the type here from ClusterIP to NodePort
    status:
      loadBalancer: {}
    service "kubernetes-dashboard" edited
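
If you prefer a non-interactive change, the same result can be achieved with kubectl patch instead of kubectl edit; the sketch below only switches the Service type and lets Kubernetes choose the nodePort:

[root@master ~]# kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'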

13. Check the exposed NodePort

[root@master kubernetes-images-1.8]# kubectl get service kubernetes-dashboard  -n kube-system
    NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
    kubernetes-dashboard   NodePort   10.96.47.166   <none>        443:31334/TCP   53s

14. Create the Dashboard admin RBAC objects

[root@master ~]# cd /root/kubernetes-all-1.8/yaml
[root@master yaml]# kubectl create -f kubernetes-dashboard-admin.rbac.yaml
    serviceaccount "kubernetes-dashboard-admin" created
    clusterrolebinding "kubernetes-dashboard-admin" created

15. Open the Dashboard in a browser; you will be redirected to the login page
    

https://172.17.80.11:31334


16. Log in with a token

[root@master yaml]# kubectl -n kube-system get secret | grep kubernetes-dashboard-admin
    kubernetes-dashboard-admin-token-2p6dj   kubernetes.io/service-account-token   3         3h
[root@master yaml]# 
[root@master yaml]#  kubectl describe -n kube-system secret/kubernetes-dashboard-admin-token-2p6dj
    Name:         kubernetes-dashboard-admin-token-2p6dj
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  kubernetes.io/service-account.name=kubernetes-dashboard-admin
                  kubernetes.io/service-account.uid=6e35bbd8-b9fc-11e7-abe1-000c29c7c723
    Type:  kubernetes.io/service-account-token
    Data
    ======== copy the token value below to log in; do not include the "token:" prefix ==========
    token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi0ycDZkaiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjZlMzViYmQ4LWI5ZmMtMTFlNy1hYmUxLTAwMGMyOWM3YzcyMyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.Jy-hQuDL_2tgEtw1_Aaf2SHZ3-dXpH5sNqhuhqYkDnZElFO_vatJfwUM0CvTZGC0EDggKEVLwNjboMJDpDrdhshXUfYI0qK4PaFKkZWmTWZNBrL58qFDKQZ3-lDotwrMcI8xABkLuCiHLqi7mHSpvk1kIIUP4vTwx7QulOZmsHHuLUpz8nBOcGK7CiqKCQQZfWPkU_7OSC5_ECBIZXFU1T3OmqhwZPtYSo6183vsJmn6HvwT2RhFn2mkasO6YD2a-g_SzxvgW6uj0YOFzJVssGVQk0OjDPRL8ytaQiq_bZF6tDh6gh6e7UzLO6uzQhYonot2vNxRCUBUES_3DQsslg
    ca.crt:     1025 bytes
    namespace:  11 bytes
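
The token can also be extracted in a single step, which is convenient for scripting; the awk patterns below assume the secret name prefix used above:

[root@master yaml]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-admin/{print $1}') | awk '/^token:/{print $2}'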

17. Deploy heapster

[root@master ~]# cd /root/kubernetes-all-1.8/yaml
[root@master yaml]# ls
    grafana.yaml  heapster.yaml  influxdb.yaml
[root@master heapster]# kubectl create -f . 
    deployment "monitoring-grafana" created
    service "monitoring-grafana" created
    serviceaccount "heapster" created
    deployment "heapster" created
    service "heapster" created
    deployment "monitoring-influxdb" created
    service "monitoring-influxdb" created
[root@master ~]# kubectl -n kube-system get pods
    NAME                                       READY     STATUS    RESTARTS   AGE
    etcd-master.junly.com                      1/1       Running   1          3h
    heapster-5d67855584-xbkxp                  1/1       Running   0          2h
    kube-apiserver-master.junly.com            1/1       Running   4          3h
    kube-controller-manager-master.junly.com   1/1       Running   4          3h
    kube-dns-545bc4bfd4-nmhwl                  3/3       Running   6          3h
    kube-flannel-ds-5mkm7                      1/1       Running   1          3h
    kube-flannel-ds-l9xvp                      1/1       Running   0          2h
    kube-flannel-ds-v6hht                      1/1       Running   0          2h
    kube-proxy-4xgj8                           1/1       Running   0          2h
    kube-proxy-b72xm                           1/1       Running   0          2h
    kube-proxy-lmhzr                           1/1       Running   1          3h
    kube-scheduler-master.junly.com            1/1       Running   3          3h
    kubernetes-dashboard-747c4f7cf-9v9t8       1/1       Running   0          2h
    monitoring-influxdb-85cb4985d4-7t2p9       1/1       Running   5          2h
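
Once heapster is up, kubectl top (which is backed by heapster in Kubernetes 1.8) should start returning node and pod resource usage; it can take a minute or two after the pods start before data appears:

[root@master ~]# kubectl top nodes
[root@master ~]# kubectl top pods -n kube-system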

That is the whole deployment process. I am only just starting to learn Kubernetes, so if you spot any problems, please leave a comment so we can learn and improve together.

This article is from the "junly" blog; please keep this attribution: http://junly917.blog.51cto.com/2846717/1976424
