
Understanding Resource Management in Kubernetes

Requests and Limits

If a node has spare resources, a container can temporarily use more than its request defines. It can never go beyond its limit, though: CPU usage is throttled at the limit, and a container that tries to use more memory than its limit allows is terminated to protect the node.
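
As a quick sketch, this is how the two values sit side by side in a container's resources block; the numbers are only illustrative:

resources:
  requests:
    cpu: 100m      # reserved for the container; it may burst above this if the node has headroom
    memory: 200Mi
  limits:
    cpu: 200m      # CPU usage is throttled at this ceiling
    memory: 400Mi  # going above this gets the container terminated (OOMKilled)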

Kubernetes relies on requests when deciding where to place Pods. The scheduler sums the resource requests of every container in a Pod and only places the Pod on a node with enough unreserved capacity. For instance, a Pod that requests 100m CPU and 200Mi memory won't be scheduled on a node that lacks that much available space.
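
To see how much of a node's capacity is already reserved by requests, you can describe the node and look at its "Allocated resources" section (pick a node name from kubectl get nodes):

kubectl get nodes
kubectl describe node <node-name>   # check the "Allocated resources" section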

Let's see an example. Start by exporting your Docker Hub username:

export DOCKERHUB_USERNAME=
# You can use "eon01" to pull my image.

Then, create a Namespace called resource-management and deploy a simple stateless Flask application with specific resource requests and limits:

kubectl apply -f - << EOF
# Create Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: resource-management
---
# Create Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateless-flask
  namespace: resource-management
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stateless-flask
  template:
    metadata:
      labels:
        app: stateless-flask
    spec:
      containers:
      - name: stateless-flask
        image: ${DOCKERHUB_USERNAME}/stateless-flask:v0 # adjust the tag if your image differs
        ports:
        - containerPort: 5000 # assumes the Flask app listens on its default port
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
          limits:
            cpu: 200m
            memory: 400Mi
EOF
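
Once the manifests are applied, you can check that the Pods were scheduled and see the requests and limits recorded on each of them:

kubectl get pods -n resource-management
kubectl describe pods -n resource-management   # each container lists its Limits and Requests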
