RAGInfer: Efficient Retrieval-Augmented Generation Inference with Lookahead Retrieval
Chien-Yu Lin ⋅ Keisuke Kamahori ⋅ Yiyu Liu ⋅ Xiaoxiang Shi ⋅ Madhav Kashyap ⋅ Yile Gu ⋅ Rulin Shao ⋅ Zihao Ye ⋅ Kan Zhu ⋅ Rohan Kadekodi ⋅ Stephanie Wang ⋅ Arvind Krishnamurthy ⋅ Luis Ceze ⋅ Baris Kasikci
Abstract
Retrieval-augmented generation (RAG) extends large language models (LLMs) with external data sources to enhance factual correctness and domain coverage. Modern RAG pipelines rely on large datastores, creating a significant system challenge: achieving high throughput and low latency is difficult, especially when GPU memory is limited. To address these challenges, we propose RAGInfer, an efficient inference system that reduces latency and improves throughput with minimal GPU memory requirements. The core innovation of RAGInfer is lookahead retrieval, a prefetching mechanism that predicts the required data and transfers it from CPU to GPU in parallel with LLM generation. In addition, RAGInfer adopts a prefetching scheduler and a cache-aware scheduler to support efficient multi-GPU inference with minimal overhead. Evaluations show that RAGInfer achieves up to a 1.53× average reduction in end-to-end latency for single-query inference and up to 1.83× higher average throughput for batched inference, along with good throughput scalability. These results confirm the practical utility of RAGInfer for faster and more memory-efficient deployment of RAG applications.
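The core idea of lookahead retrieval can be sketched as overlapping a CPU-to-GPU data transfer with ongoing generation. The sketch below is purely illustrative and assumes a predictor and datastore API not described in the abstract: `predict_next_clusters`, `fetch_from_cpu_store`, and the in-memory `CPU_STORE` are hypothetical placeholders, and the transfer is simulated with a background thread rather than a real GPU copy.

```python
import threading

# Hypothetical stand-in for a large CPU-resident datastore (not the paper's API).
CPU_STORE = {i: f"chunk-{i}" for i in range(100)}

def predict_next_clusters(partial_output):
    # Placeholder predictor: in a real system this would guess which
    # datastore entries the next retrieval step is likely to touch.
    return [len(partial_output) % 100, (len(partial_output) + 1) % 100]

def fetch_from_cpu_store(cluster_ids, gpu_cache):
    # Simulate the CPU->GPU transfer by copying into an in-memory "GPU cache".
    for cid in cluster_ids:
        gpu_cache[cid] = CPU_STORE[cid]

def generate_with_lookahead(num_steps=4):
    gpu_cache = {}
    output = []
    for _ in range(num_steps):
        # Kick off the prefetch of the predicted clusters...
        prefetch = threading.Thread(
            target=fetch_from_cpu_store,
            args=(predict_next_clusters(output), gpu_cache),
        )
        prefetch.start()
        # ...while generation of the current token proceeds in parallel.
        output.append("tok")
        prefetch.join()  # transfer completes before the data is needed
    return output, gpu_cache
```

The point of the sketch is only the overlap structure: the transfer happens concurrently with generation, so its latency is hidden rather than added to the critical path.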