Cloud-Native Microservices With Kubernetes - 2nd Edition

A Comprehensive Guide to Building, Scaling, Deploying, Observing, and Managing Highly-Available Microservices in Kubernetes

Microservices Deployment Strategies: Custom Scheduling

Taints and Tolerations: Theory & Practice

Taints and Tolerations work together to keep Pods off inappropriate Nodes. Taints are applied to Nodes, while Tolerations are applied to Pods.

When you add a Taint to a Node, Kubernetes repels all Pods from it except those that have a matching Toleration.

Each Taint has three components:

  • A key, which is the name used to identify the Taint.
  • A value, an optional string that further qualifies the key.
  • An effect, which is the action Kubernetes takes on Pods that do not tolerate the Taint.

The effect can be one of the following:

  • NoSchedule: Kubernetes will not schedule any Pods that do not tolerate the Taint.
  • PreferNoSchedule: Kubernetes will try not to schedule any Pods that do not tolerate the Taint, but this is not guaranteed.
  • NoExecute: Kubernetes will evict any existing Pods that do not tolerate the Taint.
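
On the Pod side, a matching Toleration repeats the Taint's key, value, and effect. As a sketch (the `dedicated`/`gpu` key and value here are illustrative placeholders, not part of this example):

```yaml
# Pod spec fragment: tolerates a hypothetical Taint dedicated=gpu:NoSchedule
tolerations:
- key: "dedicated"      # must match the Taint's key
  operator: "Equal"     # "Equal" also requires the value to match; "Exists" ignores it
  value: "gpu"          # must match the Taint's value
  effect: "NoSchedule"  # must match the Taint's effect
```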

You can view the Taints of your nodes using the following command:

kubectl get nodes \
-o=custom-columns=\
NodeName:.metadata.name,\
TaintKey:.spec.taints[*].key,\
TaintValue:.spec.taints[*].value,\
TaintEffect:.spec.taints[*].effect

Alternatively, you can create an alias for the above command.

# Add the alias to your bashrc file
cat <<'EOF' >> ~/.bashrc
alias taints='kubectl get nodes \
-o=custom-columns=\
NodeName:.metadata.name,\
TaintKey:.spec.taints[*].key,\
TaintValue:.spec.taints[*].value,\
TaintEffect:.spec.taints[*].effect'
EOF

# Next, reload the bashrc file and execute the alias.
source ~/.bashrc

# Test the alias
taints

Some common Taints are those automatically added by Kubernetes (or the cloud provider) to nodes under specific conditions. Here is a list of some of them:

  • node.kubernetes.io/not-ready: Unready nodes.
  • node.kubernetes.io/unreachable: Unreachable nodes.
  • node.kubernetes.io/memory-pressure: Nodes under memory pressure.
  • node.kubernetes.io/disk-pressure: Nodes under disk pressure.
  • node.kubernetes.io/pid-pressure: Nodes under PID pressure.
  • node.kubernetes.io/network-unavailable: Nodes with network issues.
  • node.kubernetes.io/unschedulable: Unschedulable nodes.
  • node.cloudprovider.kubernetes.io/uninitialized: Uninitialized nodes (used by cloud providers).
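
For the NoExecute variants of the not-ready and unreachable Taints, Kubernetes by default adds a Toleration with a five-minute grace period to most Pods, so a brief node outage does not evict them immediately. The fragment below sketches what such a Toleration looks like:

```yaml
# Stay bound to a not-ready Node for up to 300 seconds before eviction
tolerations:
- key: "node.kubernetes.io/not-ready"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300
```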

Users and cluster administrators can also add custom taints. For example, to taint a node with a custom taint:

# Get a random node name
export nodename=$(kubectl get nodes -o custom-columns=NAME:.metadata.name --no-headers | head -n 1)

# Add a custom Taint to the node
kubectl taint nodes $nodename \
  mykey1=myvalue1:NoSchedule

# Add another custom Taint to the node
kubectl taint nodes $nodename \
  mykey2=myvalue2:NoExecute

# Add another custom Taint to the node
kubectl taint nodes $nodename \
  mykey3=myvalue3:PreferNoSchedule

# Check the Taints of your nodes again
taints

The syntax of the kubectl taint command is as follows:

kubectl taint nodes <node-name> <key>=<value>:<effect>

where:

  • <node-name>: The name of the node to which you want to add the taint.
  • <key>: The key of the taint.
  • <value>: The value of the taint.
  • <effect>: The effect of the taint (NoSchedule, PreferNoSchedule, or NoExecute). We will see these effects in action in the next section.
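
To schedule a Pod onto the node we just tainted, its spec would need matching Tolerations. A minimal sketch (the Pod name and image are illustrative) tolerating mykey1=myvalue1:NoSchedule:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo  # illustrative name
spec:
  containers:
  - name: demo
    image: nginx         # illustrative image
  tolerations:
  - key: "mykey1"
    operator: "Equal"
    value: "myvalue1"
    effect: "NoSchedule"
```

Note that the node also carries a NoExecute Taint (mykey2), so the Pod would need a second Toleration for that one to keep running there.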

Tainting All Nodes

Instead of tainting a single node, you can taint all nodes in your cluster using the --all flag. For example:

kubectl taint nodes \
  --all \
  mykey=myvalue:NoSchedule

# Check the Taints of your nodes again
taints

Removing a Taint

To remove a taint from a node, use the following command:

kubectl taint nodes <node-name> <key>-

Let's remove all the taints we created earlier:

# You may see error messages if some taints were not applied to all nodes
# but don't worry about them
kubectl taint nodes --all mykey1-
kubectl taint nodes --all mykey2-
kubectl taint nodes --all mykey3-
kubectl taint nodes --all mykey-

# Check the Taints of your nodes again
taints

Taints and Tolerations: A Practical Example

Let's see a practical example of how Taints and Tolerations work together. In this example, we are going to use a DaemonSet that deploys Fluentd (a popular log collector) on every node in the cluster. We use the image quay.io/fluentd_elasticsearch/fluentd:v4, which is pre-configured to collect logs and send them to an Elasticsearch instance. The features of Fluentd are not important for this example; we are only interested in seeing how Kubernetes schedules the DaemonSet Pods on tainted nodes.

Remember to remove any existing taints from your nodes before starting this example (if you followed the previous section, you already did that).

Start by applying the following DaemonSet manifest:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v4
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
EOF

To check where the DaemonSet Pods are running (on which nodes), use the following command:

kubectl \
-n kube-system \
get pods \
-o=custom-columns=\
PodName:.metadata.name,\
NodeName:.spec.nodeName | \
grep fluentd-elasticsearch
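
Because this DaemonSet declares no custom Tolerations, tainting a node with NoSchedule or NoExecute would keep its Pod off that node (or evict it). A common pattern for node agents such as log collectors is a blanket Toleration; as a sketch, you could add this to the Pod template:

```yaml
# In the DaemonSet's spec.template.spec: an empty key with
# operator "Exists" tolerates every Taint
tolerations:
- operator: "Exists"
```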
