I wanted to share some exciting news from the GPU world that could change the game for LLM inference. AMD has been making significant strides here thanks to the porting of vLLM to ROCm 5.6; the code is available on GitHub.
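For anyone who hasn't tried vLLM yet: the Python API on the ROCm port mirrors upstream vLLM, so existing scripts should carry over. Here's a minimal sketch, assuming you've built and installed the ROCm fork per the repo's instructions; the model name and prompts are just placeholders:

```python
# Minimal vLLM inference sketch. Assumes the ROCm port of vLLM has been
# built and installed per the linked GitHub repo; the API mirrors upstream.
from vllm import LLM, SamplingParams

prompts = [
    "The capital of France is",
    "Explain GPU memory bandwidth in one sentence:",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# vLLM handles continuous batching and paged KV-cache management internally.
llm = LLM(model="meta-llama/Llama-2-7b-hf")  # placeholder model
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```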

The result? AMD’s MI210 now almost matches Nvidia’s A100 in LLM inference performance. That’s a significant development: it could make AMD a genuinely viable option for LLM inference, a space Nvidia has traditionally dominated.

For those interested in the technical details, I recommend checking out the EmbeddedLLM blog post on this.

I’m curious to hear your thoughts on this. Has anyone managed to run it on an RX 7900 XTX?
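If you do try it on an RX 7900 XTX, a quick sanity check is to confirm your ROCm build of PyTorch actually sees the card before touching vLLM (on ROCm, the `torch.cuda.*` API is backed by HIP):

```python
# Sanity check that a ROCm build of PyTorch can see the GPU.
# torch.version.hip is a version string on ROCm builds, None on CUDA builds.
import torch

print("HIP version:", getattr(torch.version, "hip", None))
print("GPU visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```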

https://preview.redd.it/rn7n29yxpuwb1.png?width=600&format=png&auto=webp&s=bdbac0d2b34d6f43a03503bbf72b446190248789