HPCA 2026
Sat 31 January - Wed 4 February 2026 Sydney, Australia
co-located with HPCA/CGO/PPoPP/CC 2026

This program is tentative and subject to change.

Mon 2 Feb 2026 10:30 - 10:50 at Coogee - Best Paper Candidates

Large language model (LLM) inference performance is increasingly bottlenecked by the memory wall. While GPUs continue to scale raw compute throughput, they struggle to deliver scalable performance for memory-bandwidth-bound workloads. This challenge is amplified by emerging reasoning LLM applications, where long output sequences, low arithmetic intensity, and tight latency constraints demand significantly higher memory bandwidth. As a result, system utilization drops and energy per inference rises, highlighting the need for a system architecture optimized for scalable memory bandwidth.
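
As a rough, hedged illustration of why decode is bandwidth bound (not drawn from the paper), the Python sketch below estimates the arithmetic intensity and per-token time bounds of single-batch decode for a 405B-parameter model against publicly quoted H100 SXM figures; the 16-bit weight assumption, batch size of one, and the decision to ignore KV-cache traffic and multi-GPU sharding are simplifications for illustration only.

# Illustrative back-of-envelope: why single-batch LLM decode is
# memory-bandwidth bound. Hardware numbers are public H100 SXM figures;
# model settings are simplifying assumptions, not results from the paper.

PEAK_FLOPS = 990e12   # H100 SXM dense BF16/FP16 Tensor Core throughput (~990 TFLOP/s)
PEAK_BW    = 3.35e12  # H100 SXM HBM3 bandwidth (~3.35 TB/s)

params          = 405e9  # Llama3-405B parameter count
bytes_per_param = 2      # assume 16-bit weights
batch           = 1      # latency-critical reasoning request

# Each decode step streams every weight once and performs ~2 FLOPs per weight
# per sequence in the batch (multiply + add); KV-cache traffic is ignored, and
# sharding across N GPUs divides both bytes and time per device without
# changing the bandwidth-bound character.
flops_per_token = 2 * params * batch
bytes_per_token = params * bytes_per_param

intensity = flops_per_token / bytes_per_token  # FLOP per byte moved
ridge     = PEAK_FLOPS / PEAK_BW               # roofline ridge point (FLOP/byte)

t_mem  = bytes_per_token / PEAK_BW     # bandwidth-limited time per token
t_comp = flops_per_token / PEAK_FLOPS  # compute-limited time per token

print(f"arithmetic intensity: {intensity:.1f} FLOP/B vs ridge point ~{ridge:.0f} FLOP/B")
print(f"per-token lower bounds: memory {t_mem*1e3:.0f} ms, compute {t_comp*1e3:.2f} ms")

Under these assumptions decode sits near 1 FLOP per byte against a ridge point of roughly 300 FLOP per byte, so per-token time is set almost entirely by how fast weights can be streamed from memory rather than by compute throughput.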

To address these challenges, we present the Reasoning Processing Unit (RPU), a chiplet-based architecture designed for the modern memory wall. RPU introduces: (1) a Capacity-Optimized High-Bandwidth Memory (HBM-CO) that trades capacity for lower energy and cost; (2) a scalable chiplet architecture featuring a bandwidth-first power and area provisioning design; and (3) a decoupled microarchitecture that separates memory, compute, and communication pipelines to sustain high bandwidth utilization. Simulation results show that RPU achieves up to 45.3× lower latency and 18.6× higher throughput than an H100 system at iso-TDP on Llama3-405B.
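
To make the "bandwidth-first power and area provisioning" idea concrete, the sketch below is a minimal, hedged back-of-envelope under a fixed TDP; the 700 W budget, the energy-per-bit figure, and the power-split fractions are generic HBM-class assumptions chosen for illustration, not RPU or HBM-CO numbers from the paper.

# Hedged sketch (assumed numbers, not from the paper): at a fixed package TDP,
# the share of power spent on the memory system caps the bandwidth it can
# sustain, so a "bandwidth-first" split buys bandwidth at the cost of compute.

TDP_W          = 700.0  # assume an H100-class package budget (iso-TDP comparison)
PJ_PER_BIT_MEM = 4.0    # assumed end-to-end HBM-class access energy (pJ/bit)

def sustainable_bandwidth(tdp_w: float, mem_power_fraction: float,
                          pj_per_bit: float) -> float:
    """Bytes/s of memory traffic sustainable when `mem_power_fraction` of the
    TDP is provisioned for the memory system at `pj_per_bit` access energy."""
    mem_power_w = tdp_w * mem_power_fraction
    bits_per_s = mem_power_w / (pj_per_bit * 1e-12)  # W / (J/bit) = bit/s
    return bits_per_s / 8

for frac in (0.2, 0.5, 0.8):  # compute-first vs. increasingly bandwidth-first splits
    bw = sustainable_bandwidth(TDP_W, frac, PJ_PER_BIT_MEM)
    print(f"{int(frac * 100)}% of TDP on memory -> ~{bw / 1e12:.1f} TB/s sustained")

The only point of the exercise is that, at fixed TDP, sustained bandwidth scales with the watts provisioned for memory divided by the energy per bit, which is the lever that lower-energy memory such as HBM-CO and a bandwidth-first power split are aimed at.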


Mon 2 Feb

Displayed time zone: Hobart

09:50 - 11:10
Best Paper Candidates (Main Conference) at Coogee
09:50
20m
Talk
Focus: A Streaming Concentration Architecture for Efficient Vision-Language Models
Main Conference
Chiyue Wei (Duke University), Cong Guo (Duke University), Junyao Zhang (Duke University), Haoxuan Shan (Duke University), Yifan Xu (Duke University), Ziyue Zhang (Duke University), Yudong Liu (Duke University), Qinsi Wang (Duke University), Changchun Zhou (Duke University), Hai "Helen" Li (Duke University), Yiran Chen (Duke University)
10:10
20m
Talk
LoCaLUT: Harnessing Capacity–Computation Tradeoffs for LUT-Based Inference in DRAM-PIM
Main Conference
Junguk Hong (Seoul National University), Changmin Shin (Seoul National University), Sukjin Kim (Seoul National University), Si Ung Noh (Seoul National University), Taehee Kwon (Seoul National University), Seongyeon Park (Seoul National University), Hanjun Kim (Yonsei University), Youngsok Kim (Yonsei University), Jinho Lee (Seoul National University)
10:30
20m
Talk
RPU - A Reasoning Processing Unit
Main Conference
Matthew Adiletta (Harvard University), David Brooks (Harvard University), Gu-Yeon Wei (Harvard University)
10:50
20m
Talk
PinDrop: Breaking the Silence on SDCs in a Large-Scale Fleet
Main Conference
Peter W. Deutsch (Massachusetts Institute of Technology / Meta), Harish D. Dixit (Meta), Gautham Vunnam (Meta), Carl Moran (Meta), Eleanor Ozer (Meta), Sriram Sankar (Meta)