A DaemonSet ensures that all nodes in the Kubernetes cluster run a copy of a Pod. When nodes are added to the cluster, Pods are added to them; when nodes are removed from the cluster, those Pods are removed as well. Deleting a DaemonSet cleans up the Pods it created. Normally, the node on which a Pod runs is selected by the scheduler, but DaemonSet Pods are created and scheduled by the DaemonSet controller.
A DaemonSet can be used:
- To run a cluster storage daemon on each node, for example: glusterd, ceph
- To run a log collection daemon on each node, for example: fluentd, logstash
- To run a node monitoring daemon on each node, for example: Prometheus Node Exporter, collectd, Datadog agent
To learn more about DaemonSets, visit kubernetes.io, the official documentation for Kubernetes.
In this article we will create a DaemonSet of “fluentd_elasticsearch”. This creates Pods of “fluentd_elasticsearch” on each node in the cluster. Our DaemonSet definition file will have a toleration for the taint on the master node so that the Pod can also be scheduled on the master node.
- Kubernetes Cluster with at least 1 worker node.
To learn how to create a Kubernetes cluster, see the guide on creating a cluster with 1 master and 2 nodes on AWS Ubuntu 18.04 EC2 instances.
What should we do?
- Create a Daemonset
Create a Daemonset
Check for a daemonset in the default namespace and all namespaces.
kubectl get daemonsets #Get daemonsets from the default namespace
kubectl get daemonsets --all-namespaces #Get daemonsets from all namespaces using the --all-namespaces option
In the screenshot above, you can see that there are some DaemonSets available. All of these DaemonSets are for cluster components.
Now get the Pods that belong to the “kube-system” namespace.
kubectl get pods -n kube-system #Get pods from the "kube-system" namespace
All of these pods shown in the screenshot above belong to the Daemonset of cluster components.
Get a list of proxy pods.
kubectl get pods -n kube-system | grep proxy #Get pods from the "kube-system" namespace and grep for proxy
Check what controls proxy pods.
kubectl describe pod kube-proxy-s5vzp -n kube-system #Describe the pod from the "kube-system" namespace
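The Pod name above (“kube-proxy-s5vzp”) is specific to one cluster and will differ in yours. As a sketch, you can also read the controlling object directly from each Pod's ownerReferences instead of scanning the describe output (this assumes a running cluster where kube-proxy Pods carry the standard k8s-app=kube-proxy label, as on kubeadm clusters):

```shell
# For every kube-proxy pod, print the kind/name of the controller that owns it.
# Pod names differ per cluster; each line should end in "DaemonSet/kube-proxy".
for pod in $(kubectl get pods -n kube-system -l k8s-app=kube-proxy -o name); do
  kubectl get "$pod" -n kube-system \
    -o jsonpath='{.metadata.name} -> {.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'
done
```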
Get information about the DaemonSet that controls the proxy Pods.
kubectl describe daemonset kube-proxy -n kube-system #Describe the daemonset from the "kube-system" namespace
Create a file with the following daemonset definition.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-fluentd-elasticsearch-daemonset
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
In the definition above, we have a toleration for the taint on the master node. This means that the Pod can also be scheduled on the master node.
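You can verify why this toleration is needed by inspecting the taints on the master node. A quick sketch (the node name “master” is taken from this guide's cluster and may differ in yours):

```shell
# Show the taints on the master node; a kubeadm master typically carries
# node-role.kubernetes.io/master:NoSchedule, which our toleration matches.
kubectl describe node master | grep Taints
```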
Create a daemon set using the definition file created in the steps above.
kubectl create -f my-daemonset.yml #Create a daemonset
kubectl get daemonset -n kube-system #Get daemonset from the "kube-system" namespace
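Optionally, you can wait for the DaemonSet to finish rolling out before inspecting it (this assumes the definition file above has already been applied):

```shell
# Block until every desired node is running a ready Pod from this DaemonSet.
kubectl rollout status daemonset/my-fluentd-elasticsearch-daemonset -n kube-system
```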
This DaemonSet has been created in the “kube-system” namespace.
Describe the DaemonSet we just created in the “kube-system” namespace.
kubectl describe daemonset my-fluentd-elasticsearch-daemonset -n kube-system #Describe the daemonset from the "kube-system" namespace
In the screenshot above, it can be seen that Pods have been distributed on two nodes.
Now we can get information about the pods that are distributed as daemonsets on 2 nodes.
kubectl get pods -n kube-system | grep my-fluentd-elasticsearch-daemonset #Get pods from the "kube-system" namespace and grep
kubectl describe pod my-fluentd-elasticsearch-daemonset-4t9vs -n kube-system | grep Node #Describe the pods from the "kube-system" namespace and grep
kubectl describe pod my-fluentd-elasticsearch-daemonset-kxfjj -n kube-system | grep Node #Describe the pod from the "kube-system" namespace and grep
In the screenshot above, it can be seen that Pods have been distributed on worker node “nod01” and master node “master”. The reason why the pod is scheduled on the master node is Toleration to the Taint of the master node.
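As noted at the start, deleting a DaemonSet cleans up the Pods it created. When you are done experimenting, you can verify this behavior:

```shell
# Delete the DaemonSet; the controller garbage-collects its Pods on every node.
kubectl delete daemonset my-fluentd-elasticsearch-daemonset -n kube-system

# The fluentd Pods should now be gone (or in Terminating state).
kubectl get pods -n kube-system | grep my-fluentd-elasticsearch-daemonset
```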
In this article, we looked at the steps for creating a DaemonSet and saw how the Pods in the DaemonSet are distributed on each node in the Kubernetes cluster.