IntAttention: A Fully Integer Attention Pipeline for Efficient Edge Inference
Wanli Zhong ⋅ Haibo Feng ⋅ Zirui Zhou ⋅ Hanyang Peng ⋅ Shiqi Yu
Abstract
Deploying Transformer models on edge devices is limited by latency and energy budgets. While INT8 quantization effectively accelerates the primary matrix multiplications, it exposes the softmax as the dominant bottleneck. This stage incurs a costly $\mathrm{dequantize}\rightarrow\mathrm{softmax}\rightarrow\mathrm{requantize}$ detour, which can account for up to 65\% of total attention latency and disrupts the end-to-end integer dataflow critical for edge hardware efficiency. To address this limitation, we present \emph{IntAttention}, the first fully integer, plug-and-play attention pipeline that requires no retraining. At the core of our approach lies \emph{IndexSoftmax}, a hardware-friendly operator that replaces the floating-point exponential with computation performed entirely in the integer domain. \emph{IntAttention} integrates sparsity-aware clipping, a 32-entry lookup-table approximation, and direct integer normalization, thereby eliminating all datatype conversion overhead. We evaluate \emph{IntAttention} and demonstrate consistent and substantial gains. Our method achieves up to \textbf{3.7×} speedup and \textbf{61\%} energy reduction over FP16 baselines, and runs \textbf{2.0×} faster than conventional INT8 attention pipelines on Armv8 CPUs. These gains come while maintaining accuracy comparable to the baselines across diverse language and vision models, enabling practical and efficient Transformer inference on commodity edge devices.
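To make the three ingredients named above concrete (clipping, a 32-entry lookup table, and direct integer normalization), the following is a minimal sketch of a lookup-table integer softmax. It is not the paper's actual \emph{IndexSoftmax} operator: the LUT scale (one table step per unit of quantized score), the 16-bit table precision, and the 8-bit output normalization are all illustrative assumptions.

```python
import numpy as np

def lut_softmax(scores_q: np.ndarray, lut_bits: int = 5, out_bits: int = 8) -> np.ndarray:
    """Integer-only softmax sketch: clip to the LUT range, look up the
    exponential, then normalize with integer division. `scores_q` holds
    quantized integer attention scores; the scale of one LUT step per
    score unit is an assumption for illustration, not the paper's choice."""
    lut_size = 1 << lut_bits  # 32 entries, as in the abstract
    # Precomputed 16-bit fixed-point table of exp(-i); built offline in practice.
    lut = np.round(np.exp(-np.arange(lut_size)) * ((1 << 16) - 1)).astype(np.int64)
    m = scores_q.max(axis=-1, keepdims=True)
    # Sparsity-aware clipping: scores far below the row max saturate to the
    # last LUT entry (effectively zero probability), so no underflow handling.
    idx = np.clip(m - scores_q, 0, lut_size - 1)
    e = lut[idx]                                    # integer "exponentials"
    s = e.sum(axis=-1, keepdims=True)
    # Direct integer normalization to out_bits-bit probabilities.
    return ((e * ((1 << out_bits) - 1)) // s).astype(np.int32)

probs = lut_softmax(np.array([[10, 8, 3, -5]], dtype=np.int32))
```

On this example the integer output closely tracks the floating-point softmax ordering, with the smallest score rounding to zero probability; the whole path uses only comparisons, table lookups, integer adds, and one integer division per row.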