
Use Node Affinity in Kubernetes



Node affinity is a set of rules the scheduler uses to determine where a Pod can be placed in the cluster. The rules are defined using labels on nodes and label selectors specified in the Pod definition. Node affinity allows a Pod to specify an affinity for the group of nodes it can be scheduled on, so we can restrict a Pod to run only on a specific node or group of nodes.

nodeSelector is the simplest form of node selection constraint. nodeSelector is a field of PodSpec. For the Pod to be eligible to run on a node, the node must have each of the specified labels.
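
For illustration, a minimal Pod definition using nodeSelector might look like this (the app=qa label is just an example; any node label works):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-nodeselector
spec:
  containers:
  - image: nginx
    name: nginx
  nodeSelector:
    app: qa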

Node affinity is conceptually similar to nodeSelector: it allows us to constrain which nodes our Pod is eligible to be scheduled on, based on labels on the node.

There are currently two types of node affinity:

  1. requiredDuringSchedulingIgnoredDuringExecution and
  2. preferredDuringSchedulingIgnoredDuringExecution.
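
The difference between the two: the required form only schedules the Pod on nodes that satisfy the rule, while the preferred form treats the rule as a preference and falls back to other nodes if no match exists. A minimal sketch of the preferred form (the app=qa label is illustrative; weight can range from 1 to 100):

affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: app
          operator: In
          values:
          - qa

The required form is the one used in the deployment example later in this article.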

What does DuringScheduling mean?

  • Here the Pod has not yet been created and will be created for the first time.
  • The affinity rules are applied when the Pod is scheduled for the first time.

What does DuringExecution mean?

  • Here the Pod is already running, and a change that affects nodeAffinity (for example, a node label change) happens in the environment.
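
Because both types above end in IgnoredDuringExecution, such a change does not evict Pods that are already running; it only affects the scheduling of new Pods. For example (assuming the app label used later in this article):

kubectl label node node01 app- #Remove the app label; Pods already running on node01 keep running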

To learn about node affinity in detail, visit kubernetes.io, the official Kubernetes documentation.

In this article, we will see how to assign a Kubernetes Pod to a specific node using “requiredDuringSchedulingIgnoredDuringExecution” node affinity in a Kubernetes cluster.

Prerequisites

  1. Kubernetes cluster with at least 1 worker node.
    To learn how to create a Kubernetes cluster, see our guide on creating a cluster with 1 master and 2 worker nodes on AWS Ubuntu 18.04 EC2 instances.

What should we do?

  1. Configure Node-Affinity

Configure Node-Affinity

First of all, let’s get a list of available nodes in the cluster.

kubectl get nodes #Get all the nodes in the cluster

Check whether the nodes have taints.

kubectl describe node node01 | grep Taints #Describe the node node01 and grep Taints
kubectl describe node master | grep Taints #Describe the node master and grep Taints

[Image: describe nodes and check taints]

Add a label to the worker node node01.

kubectl label node node01 app=qa #Add a label

[Image: add label to node01]
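
To confirm the label was applied, you can filter the node list by that label:

kubectl get nodes -l app=qa #List only the nodes carrying the app=qa label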

Create a deployment definition file and add the following definition to it.

vim my-deployment-without-affinity.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-without-affinity
spec:
  replicas: 20
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx

[Image: deployment without affinity]

Get a list of Pods and Deployments.

kubectl get pods #Get pods in the default namespace
kubectl get deployment #Get deployments in the default namespace

Create a deployment from the definition file we created.

kubectl create -f my-deployment-without-affinity.yml #Create a deployment object
kubectl get deployment #Get deployments in the default namespace
kubectl get pods #Get pods in the default namespace

[Image: create deployment without affinity]

Get information about the Pods created by the deployment.

Here you can see that Pods were also placed on the master node. The reason for this is that the nodes do not have any taints on them, so Pods can be scheduled on any of the available nodes.

kubectl get pods -o wide #Get pods in the default namespace with more information about them using -o wide

[Image: check nodes on which Pods were placed]
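
If you wanted to keep Pods off the master without using affinity rules, you could instead taint it. This is just a sketch using the conventional master taint key, which this cluster evidently does not have set:

kubectl taint nodes master node-role.kubernetes.io/master=:NoSchedule #New Pods without a matching toleration will no longer be scheduled on the master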

Now create a deployment definition with node affinity defined.

vim my-deployment-with-affinity.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-with-affinity
spec:
  replicas: 6
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: app
                operator: In
                values:
                - qa

[Image: deployment with affinity]
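
Note that In is only one of the operators supported in matchExpressions; NotIn, Exists, DoesNotExist, Gt, and Lt are also available. For instance, a rule that merely requires the app key to be present on the node, regardless of its value, could be written as:

- matchExpressions:
  - key: app
    operator: Exists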

Get a list of existing deployments, then create a new deployment with affinity from the file created in the steps above.

kubectl get deployments #Get deployments in the default namespace
kubectl create -f my-deployment-with-affinity.yml #Create a deployment object
kubectl get deployments #Get deployments in the default namespace

[Image: create deployment with affinity]

Now it can be seen that this time the Pods were placed only on the worker node node01. The reason for this is that we defined a node affinity in the deployment definition, which ensures that the Pods are scheduled only on nodes that match the specified label.

kubectl get pods -o wide | grep app-with-affinity #Get pods in the default namespace with more information about them using -o wide and grep app-with-affinity

[Image: check nodes on which Pods were placed]
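
Because the rule is required during scheduling, a quick way to see it enforced is to remove the label and scale the deployment; the new Pods would then stay Pending, since no node satisfies the rule anymore (a sketch reusing the names from this article):

kubectl label node node01 app- #Remove the label from node01
kubectl scale deployment app-with-affinity --replicas=8 #The additional Pods stay Pending because no node matches app=qa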

Conclusion

In this article, we learned how to add labels to nodes and saw how Pods can be constrained to be scheduled on specific nodes using node affinity. We also saw that Pods can even be scheduled on the master node if it has no taint on it.

