
KV Cache Optimization


During autoregressive decoding, each new token attends to all previous tokens. Without caching, you'd recompute keys and values for the entire prefix at every step: step 1 costs $O(1)$, step 2 costs $O(2)$, and so on, summing to $O(n^2)$ total work for a sequence of length $n$.

The fix is obvious: cache the keys and values. Once you've computed $k_t = h_t W_K$ and $v_t = h_t W_V$ for a token, store them. Future steps just append to the cache and attend over it. You still pay $O(n^2)$ total attention cost (you're attending to more tokens each step), but you avoid recomputing everything from scratch.
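
To make that concrete, here's a minimal sketch of one decoding step with a cache, in PyTorch. The single-head, single-layer setup and the names `W_Q`, `W_K`, `W_V` are simplifying assumptions for illustration:

```python
import torch

def decode_step(h_t, W_Q, W_K, W_V, cache_k, cache_v):
    """One decoding step with a KV cache (single layer, single head, no mask).

    h_t: (batch, d_model) hidden state of the newest token
    cache_k, cache_v: (batch, t-1, d_k) keys/values of the prefix, or None at step 1
    """
    q_t = h_t @ W_Q                      # (batch, d_k)
    k_t = h_t @ W_K                      # computed once for this token...
    v_t = h_t @ W_V                      # ...then kept in the cache from now on

    # Append to the cache instead of recomputing the whole prefix.
    k_all = k_t.unsqueeze(1) if cache_k is None else torch.cat([cache_k, k_t.unsqueeze(1)], dim=1)
    v_all = v_t.unsqueeze(1) if cache_v is None else torch.cat([cache_v, v_t.unsqueeze(1)], dim=1)

    # Attend over every cached position: O(t) work at step t, O(n^2) summed over the sequence.
    scores = torch.einsum("bd,btd->bt", q_t, k_all) / k_all.shape[-1] ** 0.5
    out = torch.einsum("bt,btd->bd", torch.softmax(scores, dim=-1), v_all)
    return out, k_all, v_all
```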

The catch is memory. For Llama 3 70B, with 80 layers, 64 attention heads, and $d_k = 128$, a batch of 8 sequences at length 1000 in fp16 needs about 21 GB just for the KV cache if every head keeps its own keys and values. This scales linearly with sequence length and batch size, and quickly becomes the bottleneck.
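
The number is just arithmetic; a quick sanity check (assuming, as above, that every head's keys and values are cached):

```python
# KV cache bytes = 2 (K and V) * layers * heads * d_head * seq_len * batch * bytes per element
layers, heads, d_head = 80, 64, 128
seq_len, batch, fp16_bytes = 1000, 8, 2

cache_bytes = 2 * layers * heads * d_head * seq_len * batch * fp16_bytes
print(f"{cache_bytes / 1e9:.1f} GB")   # ~21.0 GB, growing linearly in seq_len and batch
```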

So there's been a lot of work on making KV caches smaller. I'll cover 11 papers here, roughly organized into three approaches: pruning tokens from the cache, compressing the cache post-hoc, and redesigning the attention mechanism itself.


Token selection

The idea here is simple: not all cached tokens matter equally. If you can identify which ones are important, you can evict the rest and keep a bounded cache.

H2O observes that attention scores follow a power-law—a small set of "heavy-hitter" tokens get most of the attention mass. They frame eviction as submodular maximization over accumulated attention scores:

$$S_t = \text{argmax}_{S \subseteq S_{t-1} \cup \{x_t\},\, |S| \leq k} \sum_{i \in S} A_i$$

This gets 5× cache reduction with minimal quality loss.
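
A sketch of the greedy flavor of this policy: keep a running total of the attention each cached token has received, and once the cache exceeds its budget, evict the lowest scorers. (Real H2O also always protects a window of recent tokens; the shapes and names here are illustrative.)

```python
import torch

def h2o_update(cache_k, cache_v, acc_attn, attn_t, budget):
    """Heavy-hitter eviction for a single head.

    cache_k, cache_v: (T, d) cached keys/values
    acc_attn: (T,) accumulated attention each cached token has received so far
    attn_t:   (T,) attention weights from the newest query
    budget:   maximum number of tokens to keep
    """
    acc_attn = acc_attn + attn_t                                  # update heavy-hitter statistics
    if cache_k.shape[0] <= budget:
        return cache_k, cache_v, acc_attn
    keep = torch.topk(acc_attn, budget).indices.sort().values    # top scorers, original order
    return cache_k[keep], cache_v[keep], acc_attn[keep]
```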

StreamingLLM tackles infinite-length sequences. The key observation is that initial tokens act as "attention sinks"—they accumulate attention regardless of content, and removing them breaks the model even with sliding window attention. The fix is straightforward: keep a few initial sink tokens plus a rolling window of recent tokens. This enables constant-memory generation over arbitrarily long sequences.
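
The eviction rule is almost trivial to state in code; a sketch, where `n_sink` and `window` are hypothetical hyperparameters:

```python
import torch

def streaming_evict(cache_k, cache_v, n_sink=4, window=1024):
    """Keep the first n_sink 'attention sink' tokens plus the most recent `window` tokens."""
    T = cache_k.shape[0]
    if T <= n_sink + window:
        return cache_k, cache_v
    keep = torch.cat([torch.arange(n_sink), torch.arange(T - window, T)])
    return cache_k[keep], cache_v[keep]
```

Note that the real method also assigns positions relative to the current cache rather than reusing the tokens' original positions; the sketch only shows which entries survive.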

VATP refines the importance metric. H2O only looks at attention scores, but what matters for the output is attention × value. So they score tokens by $I_k^t = S_k^t \cdot \|v_k\|_1$, combining accumulated attention with the value vector's magnitude. This beats H2O on most LongBench tasks at 50% compression.
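
Relative to H2O, the change is a one-line reweighting of the score; a sketch:

```python
import torch

def vatp_scores(acc_attn, cache_v):
    """VATP importance: accumulated attention times the L1 norm of each value vector.

    acc_attn: (T,) accumulated attention per cached token (as tracked in H2O)
    cache_v:  (T, d) cached value vectors
    """
    return acc_attn * cache_v.abs().sum(dim=-1)     # S_k^t * ||v_k||_1
```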

| Method | Cache Reduction | Overhead | Key Insight |
| --- | --- | --- | --- |
| H2O | 5× | Low | Heavy-hitters follow power-law |
| StreamingLLM | Fixed budget | Minimal | Initial tokens are attention sinks |
| VATP | 2× | Low | Attention × value norm |

Post-hoc compression

These methods compress existing caches without changing the architecture. Useful when you can't retrain.

FastGen notices that different attention heads have different patterns—some are local, some global, some focus on special tokens. During prefill, it profiles each head and picks a compression policy (keep special tokens, sliding window, top-k by attention, or combinations). No training required, just runtime adaptation.

DMC learns to merge adjacent cache entries. At each step, the model predicts whether to merge the new token's entry into the previous one (a decision $\alpha_t$) and with what importance weight ($\omega_t$), both derived from the first dimension of k and q. When merging, the new key and value are averaged with the existing entry, weighted by that entry's accumulated weight $z_{t-1}$:

$$k' = \frac{\omega_t k_t + z_{t-1} k_{t-1}}{\omega_t + z_{t-1}}, \quad v' = \frac{\omega_t v_t + z_{t-1} v_{t-1}}{\omega_t + z_{t-1}}$$

Training uses a Gumbel-Sigmoid relaxation so the discrete merge decisions stay differentiable. This gets up to 8× compression but requires fine-tuning.
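
At inference time the merge itself is just a weighted running average. A sketch, where `z_prev` is the accumulated weight of the current last cache entry and the 0.5 threshold stands in for the discrete decision:

```python
import torch

def dmc_step(k_t, v_t, k_prev, v_prev, z_prev, alpha_t, omega_t):
    """Either merge (k_t, v_t) into the last cache entry or append a fresh one.

    alpha_t: merge decision (relaxed with Gumbel-Sigmoid during training)
    omega_t: importance weight of the new token
    z_prev:  accumulated weight of the existing last entry
    """
    if alpha_t > 0.5:
        # Merge: weighted average; the cache does not grow this step.
        k_new = (omega_t * k_t + z_prev * k_prev) / (omega_t + z_prev)
        v_new = (omega_t * v_t + z_prev * v_prev) / (omega_t + z_prev)
        return k_new, v_new, omega_t + z_prev, False   # False = no new slot
    return k_t, v_t, omega_t, True                      # True = append a new slot
```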

$L_2$ Norm-Based Compression is the simplest approach here. The observation is that keys with low $L_2$ norm tend to receive high attention (an empirical correlation rather than a theoretical guarantee). So just keep the keys with the lowest norms. No attention statistics need to be tracked, so it works with FlashAttention, which never materializes the attention matrix.
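
A sketch of the whole method, since it really is this short:

```python
import torch

def l2_compress(cache_k, cache_v, budget):
    """Keep the `budget` cached tokens whose keys have the smallest L2 norm."""
    norms = cache_k.norm(dim=-1)                         # (T,)
    k = min(budget, norms.shape[0])
    keep = torch.topk(norms, k, largest=False).indices.sort().values
    return cache_k[keep], cache_v[keep]
```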

| Method | Cache Reduction | Training Required | Key Insight |
| --- | --- | --- | --- |
| FastGen | Configurable | No | Per-head attention profiling |
| DMC | Up to 8× | Yes | Learned merging |
| L2 Norm | Configurable | No | Low norm ↔ high attention |

Architectural changes

Most of these require training from scratch (or significant fine-tuning) but can get bigger wins.

MQA is the classic: all query heads share a single key-value head. You get an $H\times$ cache reduction (for $H$ query heads) but lose expressiveness; quality drops on tasks needing diverse attention patterns.

GQA interpolates: group the query heads and have each group share a KV head. With $G$ groups you get an $H/G$ reduction. The nice thing is you can convert existing MHA models by mean-pooling the KV heads within each group, then "uptrain" for ~5% of the original compute.
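
A sketch of the grouped lookup at attention time, assuming `H` query heads, `G` KV heads, and no causal mask for brevity. Only the `G` KV heads are ever cached, which is where the memory saving comes from:

```python
import torch

def gqa_attention(q, k, v):
    """q: (batch, H, T, d); k, v: (batch, G, T, d), with H divisible by G."""
    H, d = q.shape[1], q.shape[-1]
    G = k.shape[1]
    # Each cached KV head serves a group of H // G query heads.
    k = k.repeat_interleave(H // G, dim=1)   # (batch, H, T, d)
    v = v.repeat_interleave(H // G, dim=1)
    scores = (q @ k.transpose(-2, -1)) / d ** 0.5
    return torch.softmax(scores, dim=-1) @ v
```

With G = H this is ordinary multi-head attention, and with G = 1 it reduces to MQA.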

MLA (from DeepSeek-V2) takes a different approach. Instead of sharing heads, compress keys and values into a low-rank latent space:

$$c_{\text{KV},t} = W_{\text{DKV}} h_t, \quad k_C = W_{\text{UK}} c_{\text{KV},t}, \quad v_C = W_{\text{UV}} c_{\text{KV},t}$$

You cache only $c_{\text{KV}}$ (plus a small RoPE component for position), then reconstruct per-head keys and values during attention. This preserves per-head expressiveness while compressing storage.
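
A sketch of the cache-then-reconstruct split, with the decoupled RoPE component left out and the dimensions chosen only for illustration:

```python
import torch

class MLAKeyValue(torch.nn.Module):
    """Cache a low-rank latent c_KV per token and rebuild per-head K/V on the fly."""
    def __init__(self, d_model=4096, d_latent=512, n_heads=32, d_head=128):
        super().__init__()
        self.W_DKV = torch.nn.Linear(d_model, d_latent, bias=False)          # down-projection (cached side)
        self.W_UK = torch.nn.Linear(d_latent, n_heads * d_head, bias=False)  # up-projection to keys
        self.W_UV = torch.nn.Linear(d_latent, n_heads * d_head, bias=False)  # up-projection to values
        self.n_heads, self.d_head = n_heads, d_head

    def forward(self, h_t, latent_cache):
        c_kv = self.W_DKV(h_t).unsqueeze(1)              # (batch, 1, d_latent): the only thing stored
        cache = c_kv if latent_cache is None else torch.cat([latent_cache, c_kv], dim=1)
        B, T, _ = cache.shape
        k = self.W_UK(cache).view(B, T, self.n_heads, self.d_head)   # reconstructed per-head keys
        v = self.W_UV(cache).view(B, T, self.n_heads, self.d_head)   # reconstructed per-head values
        return k, v, cache
```

Compared with caching full keys and values, the per-token storage drops from 2 × n_heads × d_head values to d_latent (plus the small RoPE key the sketch omits).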

SnapKV targets prompt compression. During prefill, it looks at attention patterns in an "observation window" at the end of the prompt to identify which earlier positions matter, then keeps only those. Integrates with existing models without retraining.
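
A sketch of that selection for a single head, with the observation-window length and budget as hypothetical knobs (the actual method also pools the votes so that clusters of neighboring positions survive together):

```python
import torch

def snapkv_select(attn, budget, window=32):
    """attn: (T, T) prefill attention weights for one head (row i = query at position i).

    Returns the indices of prompt positions to keep in the cache.
    """
    T = attn.shape[0]
    window = min(window, T)
    obs = attn[-window:, : T - window].sum(dim=0)        # votes from the observation window
    keep_early = torch.topk(obs, min(budget, obs.shape[0])).indices
    keep = torch.cat([keep_early, torch.arange(T - window, T)]).sort().values
    return keep
```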

YOCO rethinks the whole architecture. Standard transformers have KV caches scaling as $O(N \times L)$, sequence length times layers. YOCO uses a decoder-decoder design where a "self-decoder" with efficient attention (sliding window or gated retention) produces a single global KV cache that a "cross-decoder" attends to. This drops memory to $O(N + L)$.

| Method | Cache Reduction | Quality Trade-off | Key Insight |
| --- | --- | --- | --- |
| MQA | H× | Notable | Single shared KV head |
| GQA | H/G× | Minimal | Grouped sharing |
| MLA | ~5× | Preserved | Latent compression |
| SnapKV | Configurable | Minimal | Observation window |
| YOCO | O(N×L) → O(N+L) | Arch-dependent | Single global cache |

In practice, production systems often combine approaches—GQA at the architecture level, plus something like H2O or SnapKV at runtime for additional savings.