08.05.2026

Kubernetes Manifest-Based Admission Control


Quick Reference:

apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: ValidatingAdmissionPolicy
    configuration:
      apiVersion: apiserver.config.k8s.io/v1
      kind: ValidatingAdmissionPolicyConfiguration
      staticManifestsDir: "/etc/kubernetes/admission/validating-policies/"

Kubernetes has supported API-based admission policies for a while, but they have always had one awkward weakness: they take effect only after the API server is up and the policy objects have been created. The new v1.36 alpha feature, covered in the official Kubernetes announcement and tracked by the Kubernetes project, gives platform teams a cleaner option.

What Is Manifest-Based Admission Control?

Manifest-based admission control lets kube-apiserver load admission policies and webhooks from files on disk instead of only from Kubernetes API objects. Those files are activated at API server startup, which means policy enforcement is available before regular API writes begin.

This directly solves three operational problems called out in the official documentation:

  • Bootstrap gaps before API-based policies are created
  • Self-protection gaps where admins can delete or modify critical admission objects
  • Dependence on etcd for loading policy configuration

For SRE and platform teams, that means baseline controls can stay enforced even during rough cluster states.

Why Operators Should Care

The big win is durability. If your security posture depends on ValidatingAdmissionPolicy or admission webhooks created through the API, a privileged user can still remove them. Kubernetes avoids circular dependencies by not letting those resources fully guard themselves.

Manifest-based policies break that loop. Because the source of truth is a file on disk, the policy can intercept changes to API-based admission resources without locking you out forever. If a rule is wrong, you fix the file and the API server reloads it.

That makes this feature especially useful for:

  • Managed multi-tenant clusters
  • Regulated environments with strict baseline policy requirements
  • Recovery workflows after etcd corruption or backup restore
  • Platform teams that want stronger guardrails around security controls

How To Enable It

In v1.36, the feature is alpha and disabled by default. You need to enable the ManifestBasedAdmissionControlConfig feature gate on kube-apiserver and point --admission-control-config-file at an AdmissionConfiguration that includes staticManifestsDir.

A minimal setup looks like this:

kube-apiserver \
  --feature-gates=ManifestBasedAdmissionControlConfig=true \
  --admission-control-config-file=/etc/kubernetes/admission/config.yaml

Then place policy YAML files in a dedicated directory such as /etc/kubernetes/admission/validating-policies/.
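As a rough sketch of what such a file might contain, the manifest below defines a ValidatingAdmissionPolicy requiring an owner label on Deployments. The file name, policy name, label key, and CEL rule are all illustrative assumptions, not taken from the docs; note the object name carries the .static.k8s.io suffix required for manifest-loaded objects (API-based policies would also need a binding, and how bindings work for manifest-based policies is not covered here).

```yaml
# /etc/kubernetes/admission/validating-policies/require-owner-label.yaml
# Hypothetical example; names, labels, and the rule are illustrative.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  # Manifest-loaded objects must use the .static.k8s.io name suffix
  name: require-owner-label.static.k8s.io
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    - expression: "has(object.metadata.labels) && 'owner' in object.metadata.labels"
      message: "Deployments must carry an owner label."
```

Because invalid manifests can block API server startup, it is worth linting files like this in CI before they ever land on a control plane node.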

Important Restrictions

This feature is intentionally strict. A few details matter:

  • Manifest objects must end with the .static.k8s.io suffix
  • Policies cannot use paramKind or paramRef
  • Admission webhooks must use clientConfig.url, not a Service reference
  • Each plugin gets its own manifest directory
  • Invalid manifests can block API server startup

Those limits are a fair trade for startup safety and independence from cluster state.
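The webhook restriction in particular changes how you deploy: the endpoint must be a direct URL reachable from the API server rather than an in-cluster Service. A hedged sketch of what a manifest-loaded webhook might look like, with illustrative names and a placeholder CA bundle:

```yaml
# Hypothetical webhook manifest; names and URL are illustrative assumptions.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  # .static.k8s.io suffix required for manifest-loaded objects
  name: baseline-checks.static.k8s.io
webhooks:
  - name: baseline.example.com
    # Service references are not allowed here; clientConfig.url is mandatory.
    clientConfig:
      url: "https://127.0.0.1:8443/validate"
      caBundle: "<base64-encoded-CA>"   # placeholder, not a real bundle
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
```

A practical consequence of the URL requirement is that the webhook backend must be available independently of the cluster, for example as a static pod or host-level service, or the API server will fail calls to it during bootstrap.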

A Strong First Use Case

One of the best examples from the Kubernetes docs is protecting admission resources themselves. You can define a manifest-based policy that blocks updates or deletes for API-based admission resources labeled as protected.

That means your baseline ValidatingAdmissionPolicy, MutatingAdmissionPolicy, or webhook configs can no longer be casually removed through the API. For platform teams, that is a meaningful upgrade in cluster hardening.
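A sketch of what such a protection rule might look like, assuming CEL semantics matching API-based ValidatingAdmissionPolicy; the label key and the exact resource list are illustrative assumptions:

```yaml
# Hypothetical self-protection policy; label key and resources are assumptions.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: protect-admission-objects.static.k8s.io
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["admissionregistration.k8s.io"]
        apiVersions: ["*"]
        operations: ["UPDATE", "DELETE"]
        resources:
          - validatingadmissionpolicies
          - validatingwebhookconfigurations
          - mutatingwebhookconfigurations
  validations:
    # oldObject is used because on DELETE there is no incoming object.
    - expression: >-
        !has(oldObject.metadata.labels) ||
        !('admission.example.com/protected' in oldObject.metadata.labels)
      message: "Protected admission resources cannot be modified or deleted."
```

Because this rule lives on disk rather than in etcd, removing the protection requires node-level access to edit the file, which is exactly the escalation boundary the feature is meant to create.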

Conclusion

Manifest-based admission control is still early, but it is one of the more practical Kubernetes v1.36 features for SRE teams. It closes real policy enforcement gaps, reduces dependency on etcd during startup, and gives operators a stronger way to protect the controls that protect everything else.

If you run shared Kubernetes clusters, this feature is worth testing now so you are ready when it matures.

Looking to automate policy checks, incident response, and operational workflows across your infrastructure? Akmatori helps SRE teams run AI-powered automation with real operational context. Built on Gcore's global infrastructure, Akmatori is designed for modern platform engineering teams.

Automate incident response and prevent on-call burnout with AI-driven agents!