HPCA 2026
Sat 31 January - Wed 4 February 2026 Sydney, Australia
co-located with CGO/PPoPP/CC 2026
Mon 2 Feb 2026 16:30 - 16:50 at Coogee - Efficient LLM Inference Techniques Chair(s): Jovan Stojkovic

The rise of long-context Large Language Models (LLMs) amplifies memory and bandwidth demands during autoregressive decoding, as the Key–Value (KV) cache grows with each generated token. Low-bit KV-cache quantization (e.g., 4-bit or 2-bit) can reduce memory footprint while preserving accuracy, but existing systems suffer from slow decoding due to their exclusive reliance on CUDA cores, neglecting Tensor Cores—the primary source of compute on modern GPUs.
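To make the memory pressure concrete, here is a back-of-envelope calculation of the KV-cache footprint for LLaMA-3.1-8B at a 128K context (using its publicly documented configuration: 32 layers, 8 KV heads via grouped-query attention, head dimension 128, batch size 1). This arithmetic is illustrative and not taken from the paper; per-group quantization scales are ignored.

```python
# KV-cache size: K and V tensors across all layers, KV heads, and positions.
layers, kv_heads, head_dim = 32, 8, 128   # LLaMA-3.1-8B config (assumed here)
seq_len = 128 * 1024                      # 128K-token context

elems = 2 * layers * kv_heads * head_dim * seq_len  # factor 2: K and V

fp16_gib = elems * 2 / 2**30    # 2 bytes per FP16 element
int4_gib = elems * 0.5 / 2**30  # 0.5 bytes per 4-bit element (scales ignored)
print(f"FP16: {fp16_gib:.0f} GiB, 4-bit: {int4_gib:.0f} GiB")
```

At FP16 the cache alone approaches the capacity of a single consumer GPU, which is why low-bit KV-cache quantization is attractive for long-context decoding.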

We present BitDecoding, a new inference system for long-context LLMs with a low-bit KV cache. BitDecoding enables efficient low-bit KV-cache decoding by cooperatively leveraging CUDA Cores and Tensor Cores. It introduces methods for automatically deriving optimized data layouts that exploit Tensor Cores, along with novel warp-level parallelization strategies for dequantization. For unified system support, BitDecoding includes a query-transformation module supporting diverse attention variants, a high-performance quantization kernel supporting both the tensor-wise and channel-wise scaling used by various quantization algorithms, and a dequantization kernel with a software-defined pipeline that coordinates CUDA Core and Tensor Core execution for mixed-precision operations.
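The tensor-wise versus channel-wise scaling mentioned above can be sketched in a few lines. The snippet below shows symmetric 4-bit quantization with one scale per channel; this is a minimal NumPy illustration of the general scheme, not BitDecoding's actual GPU kernel (the function names and the [tokens, channels] layout are assumptions for this sketch).

```python
import numpy as np

def quantize_4bit_channelwise(x: np.ndarray):
    """Quantize a [tokens, channels] tensor to signed 4-bit codes
    with one scale per channel (symmetric range [-7, 7])."""
    scale = np.abs(x).max(axis=0, keepdims=True) / 7.0
    scale = np.where(scale == 0, 1.0, scale)  # avoid divide-by-zero
    q = np.clip(np.round(x / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover an approximate FP32 tensor from codes and scales."""
    return q.astype(np.float32) * scale

x = np.random.randn(16, 8).astype(np.float32)
q, s = quantize_4bit_channelwise(x)
x_hat = dequantize(q, s)
# Rounding error is bounded by half a quantization step per channel.
print(np.abs(x - x_hat).max())
```

Tensor-wise scaling is the degenerate case with a single scale for the whole tensor; channel-wise scaling trades a few extra bytes of metadata for much lower error when channel magnitudes differ, which is the common case for KV activations.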

Evaluated on RTX 4090, A100, and H100, BitDecoding accelerates decoding by up to 7.5×, 4.8×, and 8.9×, respectively, over FP16 FlashDecoding-v2, and surpasses the state-of-the-art low-bit system QServe by up to 4.3×. On LLaMA-3.1-8B with a 128K context, BitDecoding reduces single-batch decoding latency by 3×, showing substantial improvements for long-context generation.

Mon 2 Feb

Displayed time zone: Hobart

15:50 - 17:10
Efficient LLM Inference Techniques (Main Conference) at Coogee
Chair(s): Jovan Stojkovic University of Illinois at Urbana-Champaign
15:50
20m
Talk
PADE: A Predictor-Free Sparse Attention Accelerator via Unified Execution and Stage Fusion
Main Conference
Huizheng Wang (Tsinghua University), Hongbin Wang (Tsinghua University), Zichuan Wang (Tsinghua University), Zhiheng Yue (Tsinghua University), Yang Wang (Tsinghua University), Chao Li (Shanghai Jiao Tong University), Yang Hu (Tsinghua University), Shouyi Yin (Tsinghua University)
16:10
20m
Talk
AQPIM: Breaking the PIM Capacity Wall for LLMs with In-Memory Activation Quantization
Main Conference
Kosuke Matsushima (Institute of Science Tokyo), Yasuyuki Okoshi (Institute of Science Tokyo), Masato Motomura (Institute of Science Tokyo), Daichi Fujiki (Institute of Science Tokyo)
16:30
20m
Talk
BitDecoding: Unlocking Tensor Cores for Long-Context LLMs with Low-Bit KV Cache
Main Conference
Dayou Du (University of Edinburgh), Shijie Cao (Microsoft Research), Jianyi Cheng (University of Edinburgh), Luo Mai (University of Edinburgh), Ting Cao (Institute for AI Industry Research (AIR), Tsinghua University), Mao Yang (Microsoft Research)
16:50
20m
Talk
GyRot: Leveraging Hidden Synergy between Rotation and Fine-grained Group Quantization for Low-bit LLM Inference
Main Conference