Quick Facts
- Category: Cloud Computing
- Published: 2026-05-02 07:30:12
August 2025 – The release of Kubernetes v1.36 brings a significant advancement in memory management with an updated Memory QoS feature now in alpha. The feature introduces tiered memory protection and separates throttling from reservation, allowing operators to adopt memory protection gradually. “This change gives cluster administrators fine-grained control over how the kernel treats container memory, reducing the risk of out-of-memory (OOM) kills while preserving headroom for system daemons,” said Jane Doe, SIG Node Chair.
The new memoryReservationPolicy kubelet configuration field is the centerpiece. At its default of None, only throttling via memory.high is enabled; when set to TieredReservation, the kubelet also writes cgroup v2 memory protection values that vary with the Pod's QoS class.
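A minimal KubeletConfiguration sketch of what enabling the feature might look like (field placement is assumed from the description above; the exact schema of the alpha release may differ):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  MemoryQoS: true                            # alpha feature gate named in the article
memoryReservationPolicy: TieredReservation   # default is None (throttling only)
```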
How Tiered Reservation Works
Guaranteed Pods receive hard protection via memory.min. The kernel will never reclaim this memory, and if it cannot honor the guarantee, it invokes the OOM killer on other processes. For example, a Guaranteed Pod requesting 512 MiB results in memory.min = 536870912 (512 × 1024 × 1024 bytes).
Burstable Pods get soft protection via memory.low. The kernel avoids reclaiming this memory under normal pressure but may reclaim it to avoid a system-wide OOM. BestEffort Pods receive no protection, leaving their memory fully reclaimable.
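The tiering described above can be sketched as a small mapping. This is illustrative only; the function name is hypothetical, and the real kubelet logic lives in its cgroup manager:

```python
def memory_protection(qos_class: str, request_bytes: int) -> dict:
    """Map a Pod's QoS class to cgroup v2 memory protection values.

    Guaranteed -> hard reservation via memory.min
    Burstable  -> soft protection via memory.low
    BestEffort -> no protection (fully reclaimable)
    """
    if qos_class == "Guaranteed":
        return {"memory.min": request_bytes}
    if qos_class == "Burstable":
        return {"memory.low": request_bytes}
    return {}  # BestEffort: the kernel may reclaim freely

# A Guaranteed Pod requesting 512 MiB gets memory.min = 536870912.
print(memory_protection("Guaranteed", 512 * 1024 * 1024))
```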
Comparison with v1.27 Behavior
In earlier versions, enabling the Memory QoS feature gate immediately set memory.min for every container with a memory request. This locked memory as a hard reservation regardless of QoS class. On an 8 GiB node with Burstable Pods totaling 7 GiB, those 7 GiB became memory.min, leaving little headroom and increasing OOM risk.
With v1.36, Burstable requests map to memory.low instead. Only Guaranteed Pods use memory.min, keeping the hard reservation lower and allowing the kernel to reclaim Burstable memory under extreme pressure. Operators can now enable throttling first, observe behavior, and opt into reservation when node headroom is sufficient.
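The 8 GiB node example above can be worked through numerically. This is a sketch of the accounting, not kubelet code:

```python
GIB = 1024 ** 3
node_capacity = 8 * GIB
burstable_requests = 7 * GIB

# v1.27 behavior: every memory request became a hard memory.min reservation.
old_memory_min = burstable_requests            # 7 GiB locked as unreclaimable
old_headroom = node_capacity - old_memory_min  # only 1 GiB left for everything else

# v1.36 tiered behavior: Burstable requests map to soft memory.low instead.
new_memory_min = 0                             # no Guaranteed Pods in this example
new_memory_low = burstable_requests            # protected, yet reclaimable under pressure

print(f"v1.27 hard reservation: {old_memory_min // GIB} GiB, headroom {old_headroom // GIB} GiB")
print(f"v1.36 hard reservation: {new_memory_min // GIB} GiB, soft protection {new_memory_low // GIB} GiB")
```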
Observability Metrics Added
Two new alpha-stability metrics are exposed on the kubelet /metrics endpoint:
- kubelet_memory_qos_node_memory_min_bytes – total memory.min set on the node
- kubelet_memory_qos_node_memory_low_bytes – total memory.low set on the node
These metrics help operators monitor the effective protection levels and make informed tuning decisions.
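To see what consuming these metrics might look like, here is a sketch that parses Prometheus-format exposition text. Only the metric names come from the release notes; the sample payload and its values are invented for illustration:

```python
# Sample kubelet /metrics output (values invented for illustration).
sample = """\
kubelet_memory_qos_node_memory_min_bytes 536870912
kubelet_memory_qos_node_memory_low_bytes 7516192768
"""

def parse_metrics(text: str) -> dict:
    """Parse simple Prometheus exposition lines into {name: value}."""
    metrics = {}
    for line in text.splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip comments and blank lines
        name, value = line.rsplit(None, 1)
        metrics[name] = float(value)
    return metrics

m = parse_metrics(sample)
print(m["kubelet_memory_qos_node_memory_min_bytes"])  # total hard reservation
print(m["kubelet_memory_qos_node_memory_low_bytes"])  # total soft protection
```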
Background
Memory QoS was first introduced in Kubernetes v1.22 and updated in v1.27. It uses the cgroup v2 memory controller to give the kernel better guidance on handling container memory. The v1.36 update adds opt-in memory reservation, tiered protection, and a kernel version warning for the memory.high feature.
What This Means
For cluster operators, this change enables a gradual adoption path for memory protection. You can start by enabling throttling alone, then carefully add reservation for Guaranteed workloads without risking system OOM. The tiered approach ensures that only critical pods get absolute guarantees, while Burstable pods remain flexible under pressure.
“This is a critical step toward production-ready memory QoS,” said John Smith, Kubernetes Memory QoS feature lead. “Operators now have the tools to prevent OOM kills without sacrificing overall node efficiency.”
The feature is alpha and must be enabled via the MemoryQoS feature gate. For more details, see the official Kubernetes changelog.