06.05.2026

Kubernetes v1.36 Makes Pod Resizing Practical


The official Kubernetes v1.36 in-place pod-level resources announcement highlights a useful operational win: changing the aggregate resource envelope of a running pod. If you run application containers plus log shippers, proxies, or metrics sidecars, this makes resource tuning much less awkward.

What Is New in v1.36?

Kubernetes already supported pod-level resources and in-place pod vertical scaling in earlier releases. In v1.36, those capabilities come together. You can now update .spec.resources on a running pod and let containers that inherit limits from the pod-level budget scale with it.

That matters most for shared-pool pods where only the total pod budget is defined. Instead of recalculating and patching each container limit, operators can resize the pod-level boundary and let kubelet coordinate the update.
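As a sketch, such a shared-pool pod defines only a pod-level budget and lets its containers draw from it. Names, images, and values here are illustrative; the pod name matches the resize example later in the post:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pool-app
spec:
  resources:            # pod-level budget shared by all containers
    limits:
      cpu: "2"
      memory: 2Gi
  containers:
    - name: app
      image: registry.example.com/app:latest          # placeholder image
      # no per-container limits: this container draws from the pod-level pool
    - name: log-shipper
      image: registry.example.com/log-shipper:latest  # placeholder image
```

Because neither container carries its own limits, resizing the pod-level budget is the only change needed when the workload's envelope grows.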

According to the Kubernetes announcement, the feature is now beta and enabled by default via the InPlacePodLevelResourcesVerticalScaling feature gate.

Why SRE Teams Should Care

This is a practical feature for noisy production workloads. If a service hits a temporary CPU ceiling, you can expand the shared pool without forcing a full pod replacement. That reduces disruption during incident mitigation and gives platform teams a cleaner path for controlled resource changes.

It also improves status visibility. Kubernetes reports PodResizePending when the new size cannot be admitted yet and PodResizeInProgress when the node has accepted the resize but kubelet is still applying it. That is much easier to reason about during debugging than guessing whether a patch actually took effect.
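To make those conditions concrete, here is an illustrative .status.conditions fragment for a resize the node cannot admit yet. It is not captured from a real cluster; the condition type and the Deferred reason come from the announcement, while the status and message values are assumptions:

```yaml
status:
  conditions:
    - type: PodResizePending
      status: "True"
      reason: Deferred    # node is currently too full; kubelet may retry later
      message: "Node didn't have enough capacity for the requested resize"
```

Once the node admits the resize, the condition switches to PodResizeInProgress while kubelet applies the new limits.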

Requirements

This feature is beta in Kubernetes v1.36 and enabled by default, but there are still platform requirements. The official guidance calls out:

  • Linux nodes only
  • cgroup v2
  • a CRI implementation that supports UpdateContainerResources, such as containerd v2.0+ or CRI-O
  • feature gates including PodLevelResources, InPlacePodVerticalScaling, InPlacePodLevelResourcesVerticalScaling, and NodeDeclaredFeatures

If you operate mixed node pools, verify runtime and cgroup compatibility before you rely on this behavior in production.
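One of those checks, the cgroup version, can be verified directly on a node. A minimal sketch, assuming a typical Linux mount layout:

```shell
# Report the filesystem type of the cgroup mount on this node.
# cgroup v2 reports "cgroup2fs"; cgroup v1 reports "tmpfs".
stat -fc %T /sys/fs/cgroup
```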

Usage

A simple resize updates the pod-level resource budget through the resize subresource:

kubectl patch pod shared-pool-app --subresource resize --patch \
  '{"spec":{"resources":{"limits":{"cpu":"4"}}}}'

In the example from the Kubernetes blog, a pod starts with a shared 2 CPU limit and is later expanded to 4 CPUs. Containers whose resizePolicy for CPU is NotRequired apply the update without a restart.
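The resizePolicy field is declared per container and per resource. A sketch of a container that tolerates CPU resizes in place but restarts on memory changes (name and image are illustrative):

```yaml
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest  # placeholder image
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired          # apply CPU changes in place
        - resourceName: memory
          restartPolicy: RestartContainer     # restart if the memory limit changes
```

Splitting the policy this way is common because many runtimes can adjust CPU shares live, while applications often cannot react safely to a shrinking memory limit.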

Operational Tips

Use this first on workloads where containers inherit from a pod-level limit instead of carrying tightly tuned per-container values. Watch pod conditions after each resize and confirm your nodes actually have spare allocatable capacity. If the node is too full, Kubernetes marks the resize as deferred or infeasible instead of silently ignoring it.

For capacity-sensitive clusters, this feature pairs nicely with automation that reacts to incident pressure but still keeps changes explicit and observable.

Conclusion

Kubernetes v1.36 makes pod-level resource resizing much more usable for real operations. SRE teams get faster mitigation options, fewer disruptive restarts, and better visibility into whether a resize is pending, in progress, or blocked.

If you are building reliable, AI-assisted operations, Akmatori helps teams automate infrastructure workflows and incident response. Backed by Gcore, we are building tools for modern SRE and platform teams.

Automate incident response and prevent on-call burnout with AI-driven agents!