Assigning Pods to Nodes
You can constrain a Pod so that it can only run on particular node(s), or so that it prefers to run on particular nodes. There are several ways to do this, and the recommended approaches all use label selectors to make the selection. Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement (e.g. spread your Pods across nodes, and avoid placing a Pod on a node with insufficient free resources), but there are some circumstances where you may want more control over the node a Pod lands on, for example to ensure that a Pod ends up on a machine with an SSD attached to it, or to co-locate Pods from two different services that communicate a lot in the same availability zone.
nodeName
nodeName is a field in the PodSpec. It specifies that a Pod is to run on a particular node; because the node is named explicitly, the scheduler is bypassed.
Example: To run a Pod on the worker node kwn1, the Pod manifest looks as shown below.
Step1: Create a file called nodeName.yaml
#nodeName.yaml
apiVersion: v1
kind: Pod
metadata:
  name: podonkwn1
spec:
  containers:
  - name: nginx-container
    image: nginx
  nodeName: kwn1
Step2: Create the Pod by running the command below
kubectl create -f nodeName.yaml
Step3: Verify that the Pod was created on kwn1 by running the command below
kubectl get pods -o wide
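You can also read the scheduled node straight from the Pod's spec (the Pod name here matches the manifest above):
kubectl get pod podonkwn1 -o jsonpath='{.spec.nodeName}'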
nodeSelector
nodeSelector is the simplest recommended form of node selection constraint. nodeSelector is a field of PodSpec. It specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). The most common usage is one key-value pair.
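Because nodeSelector is a map, you can require more than one label at once. A minimal sketch (the disktype=ssd label is illustrative and would have to exist on a node for the Pod to schedule):
  nodeSelector:
    env: prod
    disktype: ssd
A node is eligible only if it carries every pair listed.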
Example: Create a Pod on a production worker node, i.e. a worker node that has the label env=prod.
Step1: Check the labels on all the nodes
kubectl get nodes --show-labels
Step2: Check the labels on a specific node (say kwn2)
kubectl get node kwn2 --show-labels
Step3: Create a label env=prod on a worker node (say kwn2)
kubectl label nodes kwn2 env=prod
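To confirm which nodes now carry the label, you can filter with a label selector:
kubectl get nodes -l env=prod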
Step4: Create a Pod with a nodeSelector specification. Create a file named nodeselector.yaml
#nodeselector.yaml
apiVersion: v1
kind: Pod
metadata:
  name: podnodeselector
spec:
  containers:
  - name: container1
    image: nginx
  nodeSelector:
    env: "prod"
Step5: Create the Pod by running the command below
kubectl create -f nodeselector.yaml
Step6: Verify that the Pod "podnodeselector" was created on kwn2 by running the command below
kubectl get pods -o wide
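If you want to undo the scheduling constraint later, the label can be removed with the trailing-dash syntax:
kubectl label nodes kwn2 env-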
nodeAffinity
Node affinity is conceptually similar to nodeSelector: it allows you to constrain which nodes your Pod is eligible to be scheduled on, based on labels on the node. There are currently two types of node affinity, called requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution. You can think of them as "hard" and "soft" respectively, in the sense that the former specifies rules that must be met for a Pod to be scheduled onto a node (just like nodeSelector but using a more expressive syntax), while the latter specifies preferences that the scheduler will try to enforce but will not guarantee. The "IgnoredDuringExecution" part of the names means that, similar to how nodeSelector works, if labels on a node change at runtime such that the affinity rules on a Pod are no longer met, the Pod will still continue to run on the node.
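The walkthrough below uses the "hard" variant. For reference, a "soft" preference for the same label looks like this (a minimal sketch; the weight of 1 is illustrative and can range from 1 to 100):
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: env
            operator: In
            values:
            - test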
Example: A Pod that can only be placed on a node with the label env=test or env=stag.
Step1: Check the labels on all the nodes
kubectl get nodes --show-labels
Step2: Check the labels on a specific node (say kwn2)
kubectl get node kwn2 --show-labels
Step3: Create a label env=test on a worker node (say kwn2)
kubectl label nodes kwn2 env=test --overwrite
Step4: Create a Pod which can be placed on nodes that have the label env=test or env=stag. Create a file called nodeaffinity.yaml
# nodeaffinity.yaml
apiVersion: v1
kind: Pod
metadata:
  name: podaffinity
spec:
  containers:
  - name: nginx-container
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: env
            operator: In
            values:
            - test
            - stag
Step5: Create the Pod by running the command below
kubectl create -f nodeaffinity.yaml
Step6: Verify that the Pod "podaffinity" was created on kwn2 by running the command below
kubectl get pods -o wide
Reference link
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
Taints and Tolerations
Node affinity is a property of Pods that attracts them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite: they allow a node to repel a set of pods.
Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.
Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints:
- if there is at least one un-ignored taint with effect NoSchedule then Kubernetes will not schedule the pod onto that node
- if there is no un-ignored taint with effect NoSchedule but there is at least one un-ignored taint with effect PreferNoSchedule then Kubernetes will try to not schedule the pod onto the node
- if there is at least one un-ignored taint with effect NoExecute then the pod will be evicted from the node (if it is already running on the node), and will not be scheduled onto the node (if it is not yet running on the node).
Example: Create a Pod with a toleration that matches a worker node tainted with app=web.
Step1: Taint a node with key app and value web with NoSchedule effect
kubectl taint nodes kwn1 app=web:NoSchedule
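You can confirm the taint was applied by describing the node:
kubectl describe node kwn1 | grep Taints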
Step2: Create a Pod with a toleration for the web app taint. Create a file called taint.yaml
#taint.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
  tolerations:
  - key: app
    operator: "Equal"
    value: "web"
    effect: "NoSchedule"
Step3: Create the Pod by running the command below
kubectl create -f taint.yaml
Step4: Verify where the Pod was created (a toleration allows, but does not require, scheduling onto the tainted node)
kubectl get pods -o wide
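If you want to tolerate every taint with a given key regardless of its value, the Exists operator can be used instead; a minimal sketch:
  tolerations:
  - key: app
    operator: "Exists"
    effect: "NoSchedule"
With operator Exists, no value field is specified.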
Removing a taint from a node
You can use kubectl taint to remove taints. You can remove taints by key, key-value, or key-effect. For example, the following command removes from node foo all the taints with key dedicated:
kubectl taint nodes foo dedicated-
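To remove only a taint with a specific effect, append the effect before the trailing dash:
kubectl taint nodes foo dedicated:NoSchedule-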
Reference link
https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/