GhostServe: A Lightweight Checkpointing System in the Shadow for Fault-Tolerant LLM Serving
Shakya Jayakody ⋅ Youpeng Zhao ⋅ Chinmay Dhanraj Nehate ⋅ Jun Wang
Abstract
The rise of million-token, agent-based applications has placed unprecedented demands on large language model (LLM) inference services. The long-running nature of these tasks increases their susceptibility to hardware and software faults, leading to costly job failures, wasted resources, and degraded user experience. The stateful key-value (KV) cache, which grows with the sequence length, presents a central challenge: it is both critical to serving correctness and a vulnerable component in distributed serving systems. In this work, we propose \textbf{GhostServe}, a novel checkpointing solution to facilitate fault-tolerant LLM serving. Specifically, GhostServe protects the streaming KV cache \textit{in the shadow} by applying erasure coding to generate parity shards and storing them in host memory. In the event of device failures, GhostServe enables fast reconstruction of the lost KV cache, allowing the inference process to resume seamlessly without costly full recomputation or state replication. Evaluations demonstrate that GhostServe reduces checkpointing latency by up to 2.7$\times$ and recovery latency by 2.1$\times$ over existing methods, paving the way for reliable and high-availability LLM serving at scale.
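To make the erasure-coding idea concrete: the abstract describes generating parity shards over the KV cache and keeping them in host memory, so that a shard lost to a device failure can be rebuilt from the survivors instead of being recomputed. The abstract does not specify GhostServe's actual coding scheme, shard layout, or API; the sketch below is only an illustrative single-parity (XOR) code over toy shards, which tolerates the loss of any one shard.

```python
# Illustrative sketch only: XOR parity over KV-cache shards, in the spirit
# of the erasure-coded checkpointing described in the abstract. GhostServe's
# real coding scheme and data layout are not specified here; shard sizes,
# counts, and function names are assumptions for this example.
import numpy as np

def make_parity(shards):
    """XOR all data shards into one parity shard (kept in host memory)."""
    parity = np.zeros_like(shards[0])
    for s in shards:
        parity ^= s
    return parity

def reconstruct(surviving, parity):
    """Rebuild a single lost shard from the surviving shards plus parity."""
    lost = parity.copy()
    for s in surviving:
        lost ^= s
    return lost

# Toy "KV cache": 4 equal-size byte shards spread across devices.
rng = np.random.default_rng(0)
shards = [rng.integers(0, 256, size=8, dtype=np.uint8) for _ in range(4)]
parity = make_parity(shards)  # checkpointed off the critical path

# Simulate losing shard 2 to a device failure, then recover it.
recovered = reconstruct(shards[:2] + shards[3:], parity)
assert np.array_equal(recovered, shards[2])
```

A single XOR parity is the simplest member of the erasure-code family; production systems typically use Reed-Solomon-style codes that tolerate multiple simultaneous losses at the cost of more parity storage and compute.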