Kubernetes v1.36 Adds Pod-Level Resource Managers

The official Kubernetes v1.36 pod-level resource managers announcement highlights a long-standing problem for operators: modern pods often mix one performance-critical container with a few lightweight helpers. Before this release, getting NUMA-aligned exclusive resources for the main workload often meant over-allocating those same resources to logging, metrics, or service mesh sidecars.
What Is New in v1.36?
Kubernetes v1.36 extends the kubelet's Topology Manager, CPU Manager, and Memory Manager so they can understand pod-level resource budgets declared in .spec.resources. That creates a hybrid model: a pod declares an overall CPU and memory budget, the most demanding container still gets exclusive slices, and the remaining sidecars share the rest of the pod budget instead of forcing every container into the same guaranteed allocation pattern.
According to the Kubernetes blog, this is especially useful for:
- low-latency databases
- machine learning training workloads
- pods with logging, backup, metrics, or service mesh sidecars
Why SRE Teams Should Care
This feature is about efficiency and predictability. Operators no longer have to choose between clean NUMA alignment and reasonable sidecar sizing. That matters when you want stable performance under load but still need operational helpers in the same pod.
It also improves visibility. Kubernetes adds new kubelet metrics such as resource_manager_allocations_total, resource_manager_allocation_errors_total, and resource_manager_container_assignments. Those metrics make it easier to see whether workloads land in exclusive or shared pools and whether allocation failures are starting to appear on a node.
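If you want to spot-check these counters on a node, one option is to pull the kubelet metrics endpoint and filter for the new series. This is a minimal sketch, assuming your user can reach the kubelet through the API server's node proxy; the node name is a placeholder.

# <node-name> is a placeholder for an actual node in your cluster
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/metrics" | grep resource_manager

A rising allocation error counter is an early sign that a node can no longer satisfy exclusive placements.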
Installation
This feature is alpha in Kubernetes v1.36, so you need to enable it explicitly in the kubelet. The official guidance calls for both pod-level feature gates and non-default resource manager policies:
featureGates:
  PodLevelResources: true
  PodLevelResourceManagers: true
topologyManagerPolicy: single-numa-node
topologyManagerScope: pod
cpuManagerPolicy: static
memoryManagerPolicy: Static
You need Kubernetes v1.36 or newer, and the Topology Manager policy cannot be none.
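For context, here is a minimal sketch of how those settings might sit in a complete kubelet configuration file. The apiVersion/kind header follows the standard KubeletConfiguration format; the reservedMemory block is an assumption, since the Static Memory Manager policy generally expects a reserved-memory declaration that lines up with the node's kube-reserved, system-reserved, and eviction settings, so adjust the values to your nodes.

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  PodLevelResources: true
  PodLevelResourceManagers: true
topologyManagerPolicy: single-numa-node
topologyManagerScope: pod
cpuManagerPolicy: static
memoryManagerPolicy: Static
# Assumption: the Static memory policy typically needs reservedMemory that
# matches your node's reserved memory; the value below is illustrative.
reservedMemory:
- numaNode: 0
  limits:
    memory: 1Gi

After updating the configuration, restart the kubelet on each node where the feature should be active.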
Usage
At the pod level, you define the total budget first. Then you reserve a smaller guaranteed slice for the main container:
apiVersion: v1
kind: Pod
metadata:
  name: tightly-coupled-database
spec:
  resources:
    requests:
      cpu: "8"
      memory: "16Gi"
    limits:
      cpu: "8"
      memory: "16Gi"
  containers:
  - name: database
    image: database:v1
    resources:
      requests:
        cpu: "6"
        memory: "12Gi"
      limits:
        cpu: "6"
        memory: "12Gi"
In this model, the database gets exclusive resources while sidecars can share the remaining pod budget.
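To make the sharing concrete, here is an illustrative sketch of a sidecar entry you could append to the containers list above. The name and image are placeholders; with no container-level requests or limits of its own, the sidecar runs within whatever is left of the pod budget, in this example 2 CPUs and 4Gi (8 minus 6 CPUs, 16Gi minus 12Gi).

  - name: log-shipper          # illustrative sidecar name
    image: log-shipper:v1      # illustrative image
    # no container-level resources: this container shares the
    # remainder of the pod-level budget with any other sidecars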
Operational Tips
Test this first on nodes dedicated to performance-sensitive workloads. Watch kubelet metrics before and after enabling the feature, and confirm your CPU Manager, Memory Manager, and Topology Manager settings are already aligned with your intended NUMA policy.
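One way to confirm what a node's kubelet is actually running with is its configz endpoint, sketched below on the assumption that it is reachable through the API server's node proxy; the node name is a placeholder and the jq filter is optional.

kubectl get --raw "/api/v1/nodes/<node-name>/proxy/configz" \
  | jq '.kubeletconfig | {featureGates, topologyManagerPolicy, topologyManagerScope, cpuManagerPolicy, memoryManagerPolicy}'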
Because this is still alpha, treat it as a targeted optimization, not a cluster-wide default.
Conclusion
Pod-level resource managers give Kubernetes operators a more practical way to run sidecar-heavy high-performance workloads. You get tighter resource control, better efficiency, and clearer observability without padding every container to match the main application.
If you are building reliable, AI-assisted operations, Akmatori helps teams automate infrastructure workflows and incident response. Backed by Gcore, we are building tools for modern SRE and platform teams.
