Deploying Prometheus and Grafana on a Kubernetes Cluster
2021-07-21 22:55
Original post: https://www.cnblogs.com/hujinzhong/p/14999877.html
1. Environment Planning
K8s cluster role    IP                Hostname
Control node        192.168.40.180    k8s-master1
Worker node         192.168.40.181    k8s-node1
Worker node         192.168.40.182    k8s-node2

Kubernetes version: v1.20.6
[root@k8s-master1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready control-plane,master 2d23h v1.20.6
k8s-node1 Ready worker 2d23h v1.20.6
k8s-node2 Ready worker 2d23h v1.20.6
2. Installing and Configuring node-exporter
2.1 Introduction to node-exporter
node-exporter collects monitoring metrics from machines (physical machines, VMs, cloud hosts, and so on). The metrics it gathers include CPU, memory, disk, network, file counts, and other host statistics.
2.2 Installing node-exporter
# Create the monitoring namespace
[root@k8s-master1 prometheus]# kubectl create ns monitor-sa
namespace/monitor-sa created
# Create node-export.yaml
[root@k8s-master1 prometheus]# cat node-export.yaml
apiVersion: apps/v1
kind: DaemonSet  # a DaemonSet guarantees that every node in the k8s cluster runs an identical pod
metadata:
  name: node-exporter
  namespace: monitor-sa
  labels:
    name: node-exporter
spec:
  selector:
    matchLabels:
      name: node-exporter
  template:
    metadata:
      labels:
        name: node-exporter
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
      - name: node-exporter
        image: prom/node-exporter:v0.16.0
        ports:
        - containerPort: 9100
        resources:
          requests:
            cpu: 0.15  # the container needs at least 0.15 CPU cores to run
        securityContext:
          privileged: true  # enable privileged mode
        args:
        - --path.procfs
        - /host/proc
        - --path.sysfs
        - /host/sys
        - --collector.filesystem.ignored-mount-points
        - '"^/(sys|proc|dev|host|etc)($|/)"'
        volumeMounts:
        - name: dev
          mountPath: /host/dev
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: rootfs
          mountPath: /rootfs
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: dev
        hostPath:
          path: /dev
      - name: sys
        hostPath:
          path: /sys
      - name: rootfs
        hostPath:
          path: /
# With hostNetwork, hostIPC, and hostPID all set to true, every container in this Pod shares the host's network namespace, can perform IPC with the host directly, and can see all processes running on the host. Because of hostNetwork: true, the host's port 9100 is exposed directly, so no Service is needed: port 9100 is simply open on each host.
# Apply the node-export.yaml file
[root@k8s-master1 prometheus]# kubectl apply -f node-export.yaml
# Check whether node-exporter deployed successfully
[root@k8s-master1 prometheus]# kubectl get pods -n monitor-sa -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node-exporter-nl5qz 1/1 Running 0 13s 192.168.40.181 k8s-node1
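Because the exporter runs on the host network, its metrics can be fetched from any node on port 9100. A minimal check using the node IP from the listing above (the grep filter is just an illustration):
# Fetch raw metrics straight from node-exporter over the host network
[root@k8s-master1 prometheus]# curl -s http://192.168.40.181:9100/metrics | grep '^node_cpu' | head -5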
3. Installing and Configuring Prometheus
3.1 Installing Prometheus
The installation takes five steps:
1) Create a ServiceAccount and grant it permissions via RBAC
2) Create the Prometheus data storage directory
3) Create a ConfigMap volume to hold the Prometheus configuration
4) Deploy Prometheus with a Deployment
5) Create a Service for the Prometheus Pod
# Create a ServiceAccount named monitor
[root@k8s-master1 prometheus]# kubectl create serviceaccount monitor -n monitor-sa
# Bind the monitor ServiceAccount to the cluster-admin ClusterRole through a ClusterRoleBinding
[root@k8s-master1 prometheus]# kubectl create clusterrolebinding monitor-clusterrolebinding -n monitor-sa --clusterrole=cluster-admin --serviceaccount=monitor-sa:monitor
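cluster-admin gives the monitor ServiceAccount full control of the cluster, which is convenient in a lab but far broader than Prometheus needs; note also that a ClusterRoleBinding is cluster-scoped, so the -n monitor-sa flag above has no effect. A quick check that the binding works, using kubectl's ServiceAccount impersonation:
[root@k8s-master1 prometheus]# kubectl auth can-i list nodes --as=system:serviceaccount:monitor-sa:monitor
# expected output: yes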
# Prometheus will be pinned to k8s-node1, so create its data directory there; the container runs as the nobody user, hence the permissive mode
[root@k8s-node1 ~]# mkdir /data && chmod 777 /data
[root@k8s-master1 prometheus]# cat prometheus-cfg.yaml
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus-config
  namespace: monitor-sa
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      scrape_timeout: 10s
      evaluation_interval: 1m
    scrape_configs:
    - job_name: 'kubernetes-node'
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__
        action: replace
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
    - job_name: 'kubernetes-node-cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: 'kubernetes-apiserver'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
[root@k8s-master1 prometheus]# kubectl apply -f prometheus-cfg.yaml
configmap/prometheus-config created
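Before Prometheus consumes it, the rendered configuration can be validated offline. A minimal sketch, assuming promtool (bundled with Prometheus release tarballs) is available on the master node:
# Dump the ConfigMap's prometheus.yml and run it through promtool
[root@k8s-master1 prometheus]# kubectl get configmap prometheus-config -n monitor-sa -o jsonpath='{.data.prometheus\.yml}' > /tmp/prometheus.yml
[root@k8s-master1 prometheus]# promtool check config /tmp/prometheus.yml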
Configuration details (the same ConfigMap, with the key fields annotated):
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus-config
  namespace: monitor-sa
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s     # interval between scrapes of each monitored target
      scrape_timeout: 10s      # per-scrape timeout (default 10s)
      evaluation_interval: 1m  # how often rules are evaluated for alerting (default 1m)
    scrape_configs:  # the data sources, called targets; each target set is named by job_name and is either statically configured or discovered
    - job_name: 'kubernetes-node'
      kubernetes_sd_configs:  # use Kubernetes service discovery
      - role: node            # the node role discovers every node in the cluster via the default kubelet HTTP port
      relabel_configs:        # relabeling rules
      - source_labels: [__address__]  # the original label: the discovered address
        regex: '(.*):10250'           # match addresses on port 10250
        replacement: '${1}:9100'      # keep the matched IP from ip:10250
        target_label: __address__     # the new address is the captured ${1} IP plus :9100
        action: replace               # replace action
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)  # labels matching this regex are kept; without the regex only the instance label would show
    - job_name: 'kubernetes-node-cadvisor'  # scrape cAdvisor data: container resource usage from the kubelet's /metrics/cadvisor endpoint
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap  # keep the matched labels
        regex: __meta_kubernetes_node_label_(.+)  # keep labels matching __meta_kubernetes_node_label
      - target_label: __address__                # the discovered address, e.g. __address__="192.168.40.180:10250"
        replacement: kubernetes.default.svc:443  # replace it with the new address kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)                     # capture the value of the original __meta_kubernetes_node_name label
        target_label: __metrics_path__  # rewrite the metrics path
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
        # the metrics path becomes e.g. /api/v1/nodes/k8s-master1/proxy/metrics/cadvisor;
        # ${1} is the value captured from __meta_kubernetes_node_name, so the final URL is
        # https://kubernetes.default.svc:443/api/v1/nodes/k8s-master1/proxy/metrics/cadvisor
    - job_name: 'kubernetes-apiserver'
      kubernetes_sd_configs:
      - role: endpoints  # use endpoint discovery in k8s; scrapes the data the apiserver serves on port 6443
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        # the endpoint object's namespace, its service name, and its port name
        action: keep  # scrape only the instances that match; all others are dropped
        regex: default;kubernetes;https  # keep endpoints whose service is named kubernetes, in the default namespace, with the https port
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
        # only scrape endpoints whose service carries the annotation "prometheus.io/scrape: true".
        # Annotations are key/value pairs: the source label here is the key and regex matches the value;
        # when the value matches, the keep action retains the target, otherwise it is discarded.
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
        # reset the scheme: match the prometheus.io/scheme annotation; if its value matches the regex,
        # that value replaces __scheme__.
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
        # for apps that expose metrics on a custom path: if your API is not served at /metrics, declare
        # e.g. "prometheus.io/path: /mymetrics" on the Pod's service and that path is assigned to
        # __metrics_path__, letting Prometheus find the app's actual metrics path. The annotation key
        # must match what you reference here: if the service says prometheus.io/app-metrics-path: '/metrics',
        # you must reference __meta_kubernetes_service_annotation_prometheus_io_app_metrics_path.
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        # expose the app's custom port: combine the host from __address__ with the port declared in the
        # service's "prometheus.io/port" annotation to build the final host:port scrape address.
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)  # carry over the service's own labels
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace  # record the namespace as a plain label
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name       # record the service name as a plain label
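With the kubernetes-service-endpoints job in place, any Service can opt in to scraping simply by carrying the annotations described above. A hedged sketch; the service name web-svc, the default namespace, and port 8080 are hypothetical placeholders:
# Mark a service so Prometheus discovers it and scrapes port 8080
[root@k8s-master1 prometheus]# kubectl annotate svc web-svc -n default prometheus.io/scrape=true prometheus.io/port=8080 --overwrite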
[root@k8s-master1 prometheus]# cat prometheus-deploy.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-server
  namespace: monitor-sa
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
      component: server
    #matchExpressions:
    #- {key: app, operator: In, values: [prometheus]}
    #- {key: component, operator: In, values: [server]}
  template:
    metadata:
      labels:
        app: prometheus
        component: server
      annotations:
        prometheus.io/scrape: 'false'
    spec:
      nodeName: k8s-node1  # pin the pod to this node
      serviceAccountName: monitor
      containers:
      - name: prometheus
        image: prom/prometheus:v2.2.1
        imagePullPolicy: IfNotPresent
        command:
        - prometheus
        - --config.file=/etc/prometheus/prometheus.yml
        - --storage.tsdb.path=/prometheus  # data storage directory
        - --storage.tsdb.retention=720h    # data retention period
        - --web.enable-lifecycle           # enable hot reload
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/prometheus/prometheus.yml
          name: prometheus-config
          subPath: prometheus.yml
        - mountPath: /prometheus/
          name: prometheus-storage-volume
      volumes:
      - name: prometheus-config
        configMap:
          name: prometheus-config
          items:
          - key: prometheus.yml
            path: prometheus.yml
            mode: 0644
      - name: prometheus-storage-volume
        hostPath:
          path: /data
          type: Directory
[root@k8s-master1 prometheus]# kubectl apply -f prometheus-deploy.yaml
deployment.apps/prometheus-server created
[root@k8s-master1 prometheus]# kubectl get pods -o wide -n monitor-sa
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node-exporter-nl5qz 1/1 Running 0 38m 192.168.40.181 k8s-node1
[root@k8s-master1 prometheus]# cat prometheus-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitor-sa
  labels:
    app: prometheus
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    protocol: TCP
  selector:
    app: prometheus
    component: server
[root@k8s-master1 prometheus]# kubectl apply -f prometheus-svc.yaml
service/prometheus created
# Check the NodePort the Service maps on the host
[root@k8s-master1 prometheus]# kubectl get svc -n monitor-sa
NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
prometheus   NodePort   10.99.104.223
# Open the Prometheus web UI in a browser at any node IP plus the NodePort shown under PORT(S), then
# click Status -> Targets on the page. If all discovered targets are listed there as UP, the service
# discovery we configured is scraping data normally.

3.2 Hot-Reloading Prometheus
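As an extra smoke test, Prometheus's HTTP API can be queried straight from a node. A sketch only: <nodePort> stands in for the actual NodePort printed by kubectl get svc above, since the captured output does not preserve it:
# Ask Prometheus for the "up" series; value 1 means a target is being scraped successfully
[root@k8s-master1 prometheus]# curl -s 'http://192.168.40.180:<nodePort>/api/v1/query?query=up'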
# To make configuration changes take effect without stopping Prometheus (a hot reload), first find the Pod IP:
[root@k8s-master1 prometheus]# kubectl get pods -n monitor-sa -o wide -l app=prometheus
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
prometheus-server-689fb8cdbc-kcsw2 1/1 Running 0 5m39s 10.244.36.70 k8s-node1
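Because the Deployment above passes --web.enable-lifecycle, Prometheus exposes a lifecycle endpoint over HTTP; a hot reload is then a POST to /-/reload on the Pod IP from the listing above:
# Trigger a configuration reload via the lifecycle endpoint
[root@k8s-master1 prometheus]# curl -X POST http://10.244.36.70:9090/-/reload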
# Hot reload can be fairly slow; alternatively Prometheus can be restarted the hard way. After modifying
# the prometheus-cfg.yaml file above, force-delete the resources:
[root@k8s-master1 prometheus]# kubectl delete -f prometheus-cfg.yaml
[root@k8s-master1 prometheus]# kubectl delete -f prometheus-deploy.yaml
# Then apply them again:
[root@k8s-master1 prometheus]# kubectl apply -f prometheus-cfg.yaml
[root@k8s-master1 prometheus]# kubectl apply -f prometheus-deploy.yaml
# Note: in production, prefer the hot reload; deleting and recreating may lose monitoring data
4. Installing and Configuring Grafana
4.1 Introduction to Grafana
Grafana is a cross-platform, open-source metrics analysis and visualization tool. It displays collected data visually and notifies alert receivers in a timely manner. Its main features are:
1) Visualizations: fast and flexible client-side graphs; panel plugins offer many different ways to visualize metrics and logs, and the official library ships rich dashboard plugins such as heatmaps, line charts, and graphs;
2) Data sources: Graphite, InfluxDB, OpenTSDB, Prometheus, Elasticsearch, CloudWatch, KairosDB, and others;
3) Alert notifications: define alert rules visually for your most important metrics; Grafana evaluates them continuously and, when data crosses a threshold, sends notifications via Slack, PagerDuty, and similar channels;
4) Mixed data sources: mix different data sources in the same graph; a data source can be specified per query, including custom data sources;
5) Annotations: annotate graphs with rich events from different data sources; hovering over an event shows its full metadata and tags.
4.2 Installing Grafana
# Prepare the yaml file
[root@k8s-master1 prometheus]# cat grafana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: grafana
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: hujinzhong/heapster-grafana-amd64:v5.0.4
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        # The following env variables are required to make Grafana accessible via
        # the kubernetes api-server proxy. On production clusters, we recommend
        # removing these env variables, setup auth for grafana, and expose the grafana
        # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort
# Apply the yaml file:
[root@k8s-master1 prometheus]# kubectl apply -f grafana.yaml
# Check whether grafana was created successfully:
[root@k8s-master1 prometheus]# kubectl get pods -n kube-system -l task=monitoring -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
monitoring-grafana-675798bf47-z9dpx 1/1 Running 0 19s 10.244.169.135 k8s-node2
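Since the Service is of type NodePort, Grafana is reachable on a high port on every node. A quick way to look up the assigned port (the 31715 used in the next section was the port in the original environment; a fresh cluster will get a different one):
# Show the NodePort assigned to the monitoring-grafana service
[root@k8s-master1 prometheus]# kubectl get svc monitoring-grafana -n kube-system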
4.3 Configuring Grafana
1) Log in to Grafana by browsing to http://192.168.40.180:31715
2) Start configuring the Grafana web UI: select Create your first data source
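The screenshots that walked through the data source form did not survive in this copy. As a hedged reconstruction: the data source type would be Prometheus, with its URL pointed at the Service created in section 3.1, for example http://prometheus.monitor-sa.svc:9090 (an in-cluster address inferred from that Service's name, namespace, and port); Save & Test should then report the data source as working.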
4.4 Importing Monitoring Dashboards
Ready-made dashboards can be found by searching the official library: https://grafana.com/dashboards?dataSource=prometheus&search=kubernetes
4.4.1 Monitoring node status
Click Import under the + sign in the left sidebar and import the node_exporter.json template.
4.4.2 Monitoring container status
Click Import under the + sign in the left sidebar and import the docker_rev1.json template.