Prometheus, Grafana and Loki Installation (Kubeadm)
The following instructions can be used to install Prometheus, Grafana and Loki on a Kubeadm installation.
Setup Metrics Server for Kubeadm
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl edit deploy metrics-server -n kube-system
add the following lines as part of the container args:
command:
- /metrics-server
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
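Once the metrics-server pod has restarted, you can sanity-check that it is serving metrics (this assumes the default k8s-app=metrics-server label from components.yaml; the pod may take a minute to become ready):
kubectl get pods -n kube-system -l k8s-app=metrics-server
kubectl top nodes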
Increase the number of open files allowed on the server
sudo vi /etc/security/limits.conf
add the following lines to the bottom of the file
* hard nofile 1000000
* soft nofile 1000000
Increase inotify limits
sudo vi /etc/sysctl.conf
add the following lines to the end of the file
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=524288
Apply the sysctl changes (the limits.conf changes take effect on your next login)
sudo sysctl --system
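To verify that the new values are active (run the ulimit check from a fresh login shell, since limits.conf is only read at login):
sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
ulimit -n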
Setup Persistent Volumes
Create Directories
sudo mkdir -p /data/prometheus/pv1
sudo mkdir -p /data/prometheus/pv2
sudo mkdir -p /data/prometheus/pv3
sudo mkdir -p /data/prometheus/pv4
Edit Permissions
sudo chown -R 65534:65534 /data/prometheus/pv1
sudo chown -R 65534:65534 /data/prometheus/pv2
sudo chown -R 65534:65534 /data/prometheus/pv3
sudo chown -R 65534:65534 /data/prometheus/pv4
sudo chmod -R 777 /data/prometheus/pv4
Create PVs
vi prometheus-pv.yaml
#prometheus-pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv1
spec:
  storageClassName: ""
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/prometheus/pv1"
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv2
spec:
  storageClassName: ""
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/prometheus/pv2"
kubectl create namespace monitoring
kubectl create -f prometheus-pv.yaml -n monitoring
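Confirm that both volumes were created and show a STATUS of Available (PersistentVolumes are cluster-scoped, so no namespace is needed):
kubectl get pv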
Add Prometheus Community Helm repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
Create a prometheus.yaml file using the example below:
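The original example file is not reproduced here. As an illustrative sketch only (assuming the chart's standard server.persistentVolume and alertmanager.persistence values), a minimal prometheus.yaml could look like the following, with sizes chosen to line up with the 8Gi and 2Gi volumes created earlier:
# prometheus.yaml - illustrative sketch; adjust to your environment
server:
  persistentVolume:
    enabled: true
    size: 8Gi
alertmanager:
  persistence:
    size: 2Gi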
Install Prometheus Helm Chart using the prometheus.yaml file you created above
helm upgrade -i prometheus prometheus-community/prometheus -f prometheus.yaml -n monitoring
The internal DNS name that the k8s cluster will use to access the Prometheus server is: prometheus-server.monitoring.svc.cluster.local
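Before setting up the Ingress, you can spot-check the server with a port-forward (this assumes the chart's default service port of 80; adjust if you overrode it in prometheus.yaml):
kubectl -n monitoring port-forward svc/prometheus-server 9090:80
Then browse to http://localhost:9090 and confirm the Prometheus UI loads.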
Setup Ingress for external access to the Prometheus server
Create the following Ingress yaml file
vi prometheus-ingress.yaml
#prometheus-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus-ingress
  namespace: monitoring
spec:
  ingressClassName: nginx
  rules:
  - host: prometheus.testmachine.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus-server
            port:
              number: 9090
kubectl apply -f prometheus-ingress.yaml
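Check that the Ingress was admitted and picked up an address:
kubectl get ingress -n monitoring
The host must resolve to your ingress controller, and the backend port must match your prometheus-server service port (the chart default is 80, so adjust the Ingress if you did not override the service port in prometheus.yaml).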
Grafana and Loki Setup
Create and apply PV
vi grafana-pv.yaml
#grafana-pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: grafana
spec:
  storageClassName: ""
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/prometheus/pv3"
kubectl apply -f grafana-pv.yaml
Create and apply PVC
vi grafana-pvc.yaml
#grafana-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    meta.helm.sh/release-name: grafana
    meta.helm.sh/release-namespace: monitoring
  finalizers:
    - kubernetes.io/pvc-protection
  labels:
    app.kubernetes.io/instance: grafana
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    app.kubernetes.io/version: 10.3.1
    helm.sh/chart: grafana-7.3.0
  name: grafana
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: ""
  volumeMode: Filesystem
  volumeName: grafana
kubectl apply -f grafana-pvc.yaml
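Because the claim names the pre-created volume directly via volumeName, it should bind immediately:
kubectl get pvc grafana -n monitoring
The STATUS column should read Bound.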
Add Grafana Helm repo
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
Install Grafana Helm Chart
helm install grafana grafana/grafana --set persistence.enabled=true --set persistence.existingClaim=grafana --namespace monitoring
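Equivalently, the same settings can be kept in a small values file instead of --set flags (a sketch using the chart's documented persistence values; the filename grafana-values.yaml is chosen here for illustration):
# grafana-values.yaml
persistence:
  enabled: true
  existingClaim: grafana
helm install grafana grafana/grafana -f grafana-values.yaml --namespace monitoring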
Create and apply PV for Loki
vi loki-pv.yaml
# loki-pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: storage-loki-0
spec:
  storageClassName: ""
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/data/prometheus/pv4"
kubectl apply -f loki-pv.yaml
Create and apply PVC for Loki
vi loki-pvc.yaml
#loki-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-loki-0
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: ""
  volumeMode: Filesystem
  volumeName: storage-loki-0
kubectl apply -f loki-pvc.yaml
Create custom values file for Loki
vi loki.yaml
# loki.yaml
loki:
  commonConfig:
    replication_factor: 1
  storage:
    type: 'filesystem'
  auth_enabled: false
singleBinary:
  replicas: 1
helm install -f loki.yaml loki grafana/loki --version 5.43.6 -n monitoring
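In single-binary mode the chart runs Loki as a StatefulSet, so the pod is named loki-0 and its volume claim is the StatefulSet template name plus the pod name, which is why the PVC above had to be called exactly storage-loki-0. Confirm the pod is running:
kubectl get pods -n monitoring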
Install and setup Promtail
helm install promtail grafana/promtail --version 6.15.5 -n monitoring
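Promtail runs as a DaemonSet, so there should be one pod per node shipping logs to Loki (this assumes the default DaemonSet name taken from the release name):
kubectl get daemonset promtail -n monitoring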
Setup Ingress for external access to the Grafana server
Create the following Ingress yaml file
vi grafana-ingress.yaml
#grafana-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: monitoring
spec:
  ingressClassName: nginx
  rules:
  - host: grafana.testmachine.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 80
Remember to make sure that the host specified in the Ingress rule (grafana.testmachine.com) matches your configuration.
kubectl apply -f grafana-ingress.yaml
Retrieve Grafana password
kubectl get secret grafana -o jsonpath="{.data.admin-password}" -n monitoring | base64 -d ; echo
Open the Grafana server in your browser at http://grafana.testmachine.com
Username: admin
Password: <as per command above>
On the Home screen, click on the "Add your first data source" option.
Select "Prometheus" and configure the connection with the following URL: http://prometheus-server.monitoring.svc.cluster.local
Then scroll to the bottom and click "Save & test"; you should get a confirmation that the test was successful.
Click again on "Data sources" on the left-hand side of the screen.
Click on "Add new data source".
Select "Loki" and configure the connection with the following URL: http://loki.monitoring.svc.cluster.local:3100
Then scroll to the bottom and click "Save & test"; you should get a confirmation that the test was successful.
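To confirm that Promtail is actually shipping logs, open Explore, choose the Loki data source, and run a simple LogQL label query (any label selector will do; namespace is one of the labels Promtail attaches by default):
{namespace="monitoring"}
You should see recent log lines from the monitoring namespace.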