30.04.2026

Kubernetes v1.36 Makes Memory QoS Practical


The official Kubernetes v1.36 Memory QoS announcement is worth a close read if you operate multi-tenant clusters. Memory tuning in Kubernetes has always been tricky because protecting important workloads often meant introducing new OOM risk elsewhere. This update makes that tradeoff much easier to manage.

What Changed in Kubernetes v1.36?

Memory QoS has been built on the Linux cgroup v2 memory controller from the start, but the earlier behavior could be too aggressive. When the feature was enabled, the kubelet set memory.min for every container with a memory request. That creates a hard reservation the kernel will not reclaim, even under pressure.

In v1.36, throttling and reservation are split. You can enable the MemoryQoS feature gate and keep memoryReservationPolicy: None, which means memory.high throttling still works without forcing hard reservation. If you want stronger protection, memoryReservationPolicy: TieredReservation maps Guaranteed pods to memory.min and Burstable pods to memory.low.

That is the key improvement. Guaranteed workloads keep strong protection, while Burstable workloads get softer protection that the kernel can still reclaim during extreme pressure.
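
To see what TieredReservation writes in practice, you can inspect the cgroup files for a pod directly on a node. This is an illustrative sketch, assuming the systemd cgroup driver and its default kubepods.slice hierarchy; <pod-uid> is a placeholder and the exact slice names vary by setup:

# Guaranteed pods sit directly under kubepods.slice; their memory
# request lands in memory.min as a hard reservation.
cat /sys/fs/cgroup/kubepods.slice/kubepods-pod<pod-uid>.slice/memory.min

# Burstable pods live under kubepods-burstable.slice and get the
# reclaimable memory.low protection instead.
cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod<pod-uid>.slice/memory.low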

Why SRE Teams Should Care

This change lowers the chance of turning memory requests into accidental node starvation. Under the older behavior, a node packed with Burstable pods could lock up too much memory as hard reservation. In the v1.36 model, only Guaranteed pods consume hard-protected memory.

The release also adds kubelet metrics that make this visible:

  • kubelet_memory_qos_node_memory_min_bytes
  • kubelet_memory_qos_node_memory_low_bytes

Those metrics give operators a much clearer signal for capacity planning. If hard reservation starts creeping too high, you can see it before the node tips into OOM trouble.
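
If you want to eyeball these gauges before wiring them into dashboards, you can scrape a kubelet's metrics endpoint directly. A minimal sketch, assuming a service account token with nodes/metrics access and network reach to port 10250:

# Grab a token and filter the kubelet's metrics for the new gauges.
TOKEN=$(kubectl create token default)   # needs RBAC for nodes/metrics
curl -sk -H "Authorization: Bearer $TOKEN" \
  https://<node-ip>:10250/metrics | grep kubelet_memory_qos_node_memory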

Installation

You need Kubernetes v1.36+, cgroup v2, and a container runtime with cgroup v2 support, such as containerd 1.6+ or CRI-O 1.22+.
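
A quick way to confirm a node is actually running cgroup v2 is to check the filesystem type mounted at /sys/fs/cgroup:

# Prints "cgroup2fs" on cgroup v2; "tmpfs" means the node is still on v1.
stat -fc %T /sys/fs/cgroup

With that confirmed, enable the feature gate and choose a reservation policy in the kubelet configuration: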

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  MemoryQoS: true
memoryReservationPolicy: TieredReservation
memoryThrottlingFactor: 0.9

If you want the safer first step, keep reservation disabled and only turn on throttling:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  MemoryQoS: true
memoryReservationPolicy: None
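
After restarting the kubelet, it is worth confirming that the running process actually picked up the new fields. A sketch using the API server's node proxy; <node-name> is a placeholder and jq is assumed on your workstation:

# Dump the live kubelet config and eyeball the Memory QoS fields.
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/configz" | jq .kubeletconfig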

Operational Tips

Roll this out in stages. Start on non-critical node pools, watch the new kubelet metrics, and compare memory pressure events before and after. Also verify your kernel is 5.9+ because older kernels can hit a known memory.high livelock issue.
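
Two quick node-level checks support that comparison. A sketch of what to run on a node before and after flipping the gate; /proc/pressure/memory requires PSI, which ships enabled in most modern distro kernels:

# memory.high throttling needs kernel 5.9+ to avoid the known livelock.
uname -r

# PSI memory pressure gives you a before/after baseline for the rollout.
cat /proc/pressure/memory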

For platform teams, this is a solid upgrade because it improves fairness under pressure without forcing an all-or-nothing reservation model.

Conclusion

Kubernetes v1.36 does not just add another tuning knob. It makes Memory QoS practical enough for real production experimentation. If you want better protection for important workloads without overcommitting node memory, this release is a meaningful step forward.

If you are building reliable, AI-assisted operations, Akmatori helps teams automate infrastructure workflows and incident response. Backed by Gcore, we are building tools for modern SRE and platform teams.

Automate incident response and prevent on-call burnout with AI-driven agents!