Code: https://github.com/hao-ai-lab/LookaheadDecoding
Blog post: https://lmsys.org/blog/2023-11-21-lookahead-decoding/
Description:
We introduce lookahead decoding, a new, exact, parallel decoding algorithm that accelerates LLM inference. Lookahead decoding breaks the sequential dependency in autoregressive decoding by concurrently extracting and verifying n-grams directly with the LLM, using the Jacobi iteration method. It requires neither a draft model nor a data store, and it reduces the number of decoding steps linearly in the log(FLOPs) invested per decoding step. Below is a demo of lookahead decoding accelerating LLaMA-2-Chat 7B generation:
https://i.redd.it/k61qtr4zz22c1.gif
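For intuition, here is a minimal, self-contained sketch of the Jacobi fixed-point iteration that lookahead decoding builds on. Everything here is illustrative, not the repo's actual API: `toy_next_token` is a made-up deterministic stand-in for a real LLM's greedy next-token step, chosen so the generated "text" is periodic and the guessed window can actually converge. Each parallel pass re-predicts an entire guessed window of future tokens at once and commits only the prefix the model confirms, so the output is exactly what ordinary greedy decoding would produce:

```python
from typing import List, Tuple

def toy_next_token(context: List[int]) -> int:
    """Deterministic stand-in for a greedy LLM step (purely illustrative,
    not the repo's API): repeat the token two positions back, yielding a
    periodic sequence that Jacobi guesses can lock onto."""
    return context[-2] if len(context) >= 2 else context[-1]

def parallel_pass(context: List[int], guess: List[int]) -> List[int]:
    """Re-predict every position of the guessed window. A transformer does
    this in one batched forward pass; the loop just emulates it."""
    seq = context + guess
    start = len(context)
    return [toy_next_token(seq[:start + i]) for i in range(len(guess))]

def jacobi_decode(prompt: List[int], n_new: int,
                  window: int = 5) -> Tuple[List[int], int]:
    """Exact parallel decoding via Jacobi iteration on a token window."""
    out = list(prompt)
    guess = [0] * window  # arbitrary initial guess for the window
    steps = 0
    target = len(prompt) + n_new
    while len(out) < target:
        preds = parallel_pass(out, guess)
        steps += 1
        # preds[0] conditions only on committed tokens, so it is always
        # correct; preds[i] is correct as long as every guessed token
        # before it was already a fixed point. This verification rule is
        # what keeps the method exact.
        n_accept = 1
        while n_accept < window and guess[n_accept - 1] == preds[n_accept - 1]:
            n_accept += 1
        out.extend(preds[:n_accept])
        # Jacobi update: the unaccepted predictions seed the next guess.
        guess = (preds[n_accept:] + [0] * window)[:window]
    return out[:target], steps

def greedy_decode(prompt: List[int], n_new: int) -> List[int]:
    """Baseline: ordinary one-token-at-a-time autoregressive decoding."""
    out = list(prompt)
    for _ in range(n_new):
        out.append(toy_next_token(out))
    return out

prompt, n_new = [4, 9], 20
fast, steps = jacobi_decode(prompt, n_new)
assert fast == greedy_decode(prompt, n_new)  # exact: identical output
print(f"decoded {n_new} tokens in {steps} parallel steps (greedy needs {n_new})")
```

The full algorithm adds two things this sketch omits: a lookahead branch that harvests n-grams from past Jacobi trajectories into a pool, and a verification branch that checks pooled n-grams against the frontier in the same forward pass. Those branches are what let the step count fall with the log(FLOPs) spent per step; vanilla Jacobi iteration alone, as above, typically commits only a token or two per pass.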