HPCA 2026
Sat 31 January - Wed 4 February 2026 Sydney, Australia
co-located with CGO/PPoPP/CC 2026
Mon 2 Feb 2026 16:50 - 17:10 at Coogee - Efficient LLM Inference Techniques Chair(s): Jovan Stojkovic

Low-bit quantization is essential for efficient LLM inference, and both rotation and fine-grained group quantization have shown individual promise. However, their combination often leads to accuracy degradation or hardware overhead, owing to a mismatch between the global nature of rotation and the localized behavior of group scaling. We propose GyRot, a quantization framework and hardware accelerator that bridges this gap through algorithm–hardware co-design. GyRot introduces Coarse Rotation, Fine Grouping (CoRFiG) and Harmonic-Aligned Permutation (HAP) to enable cooperative integration of rotation and group quantization, enhancing quantizability while relaxing scaling-factor precision. To further reduce hardware cost, we reformulate asymmetric quantization and introduce a zero-point rounding strategy that enables fully integer dequantization. Implemented on an INT4-based tensor PE architecture, GyRot achieves state-of-the-art 4-bit accuracy across LLaMA-family models, while delivering up to a 3.4× speedup and 3.6× higher energy efficiency over baseline LLM accelerators. These results validate GyRot's practical effectiveness for scalable and energy-efficient LLM deployment.
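The abstract's idea of rounding the zero-point to an integer so that dequantization stays in the integer domain can be illustrated with a generic per-group asymmetric INT4 scheme. This is a minimal NumPy sketch of that general technique, not GyRot's actual algorithm or hardware datapath; the group size and function names are illustrative assumptions.

```python
import numpy as np

def quantize_group_int4(x, group_size=128):
    """Asymmetric per-group INT4 quantization with an *integer* zero-point.
    Because z is an integer, dequantization (q - z) * s performs the
    subtraction in integer arithmetic, leaving one scalar multiply per
    element -- the flavor of 'fully integer dequantization' the abstract
    alludes to. Illustrative sketch only, not GyRot's method."""
    x = x.reshape(-1, group_size)
    xmin = x.min(axis=1, keepdims=True)
    xmax = x.max(axis=1, keepdims=True)
    scale = (xmax - xmin) / 15.0                 # 4-bit range: 0..15
    scale = np.where(scale == 0, 1.0, scale)     # guard constant groups
    # Round the zero-point to an integer (cf. a zero-point rounding
    # strategy) so q - z is exact integer math.
    zero = np.round(-xmin / scale)
    q = np.clip(np.round(x / scale) + zero, 0, 15).astype(np.int8)
    return q, scale, zero.astype(np.int8)

def dequantize_group(q, scale, zero):
    # Integer subtraction first, then one float multiply per element.
    return (q.astype(np.int32) - zero.astype(np.int32)) * scale

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 128)).astype(np.float32)
q, s, z = quantize_group_int4(x)
x_hat = dequantize_group(q, s, z).reshape(4, 128)
max_err = np.abs(x - x_hat).max()  # bounded by roughly scale/2 per group
```

Finer groups shrink each group's dynamic range and hence its scale and rounding error, which is why fine-grained grouping pairs naturally with aggressive 4-bit formats, at the cost of storing more scales and zero-points.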

Mon 2 Feb

Displayed time zone: Hobart

15:50 - 17:10
Efficient LLM Inference Techniques (Main Conference) at Coogee
Chair(s): Jovan Stojkovic University of Illinois at Urbana-Champaign
15:50
20m
Talk
PADE: A Predictor-Free Sparse Attention Accelerator via Unified Execution and Stage Fusion
Main Conference
Huizheng Wang Tsinghua University, Hongbin Wang Tsinghua University, Zichuan Wang Tsinghua University, Zhiheng Yue Tsinghua University, Yang Wang Tsinghua University, Chao Li Shanghai Jiao Tong University, Yang Hu Tsinghua University, Shouyi Yin Tsinghua University
16:10
20m
Talk
AQPIM: Breaking the PIM Capacity Wall for LLMs with In-Memory Activation Quantization
Main Conference
Kosuke Matsushima Institute of Science Tokyo, Yasuyuki Okoshi Institute of Science Tokyo, Masato Motomura Institute of Science Tokyo, Daichi Fujiki Institute of Science Tokyo
16:30
20m
Talk
BitDecoding: Unlocking Tensor Cores for Long-Context LLMs with Low-Bit KV Cache
Main Conference
Dayou Du University of Edinburgh, Shijie Cao Microsoft Research, Jianyi Cheng University of Edinburgh, UK, Luo Mai University of Edinburgh, Ting Cao Institute for AI Industry Research (AIR), Tsinghua University, Mao Yang Microsoft Research
16:50
20m
Talk
GyRot: Leveraging Hidden Synergy between Rotation and Fine-grained Group Quantization for Low-bit LLM Inference
Main Conference