I got tired of slow CPU inference, as well as Text-Generation-WebUI getting buggier and buggier.
Here's a working example that offloads all the layers of zephyr-7b-beta.Q6_K.gguf to a T4, a free GPU on Colab.
It's pretty fast! I get 28 t/s.
https://colab.research.google.com/gist/chigkim/385e8d398b40c7e61755e1a256aaae64/llama-cpp.ipynb
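Roughly, the notebook boils down to something like this with llama-cpp-python. Treat it as a sketch, not a copy of the notebook cells: the install step (a CUDA build of llama-cpp-python) and the exact parameters may differ from what's in the gist.

```python
# Minimal sketch, assuming llama-cpp-python was installed with CUDA support
# (e.g. CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python; the flag
# name has changed across versions, so check the current docs).
from llama_cpp import Llama

llm = Llama(
    model_path="zephyr-7b-beta.Q6_K.gguf",
    n_gpu_layers=-1,  # offload every layer to the T4
    n_ctx=4096,       # context window; adjust to taste
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GPU offloading in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```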
How long does it stay alive/online?
How can we cache the model on Google Drive so it doesn't need to download 5 gigs every time it's re-run? (Rough idea sketched below.)
In fact, what else could be cached?
Lastly, how do you get this to rerun after it dies, so you get fairly consistent uptime while leeching off it ;)
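One possible way to cache the model on Drive (not part of the notebook, just a sketch using the standard google.colab Drive mount; the download URL is an assumption and should be swapped for whatever the notebook actually fetches):

```python
# Sketch: mount Google Drive and only download the GGUF if it isn't already
# cached there, so re-runs skip the multi-gigabyte fetch.
import os
import urllib.request
from google.colab import drive

drive.mount("/content/drive")

MODEL_PATH = "/content/drive/MyDrive/models/zephyr-7b-beta.Q6_K.gguf"
MODEL_URL = (  # assumed direct link; substitute the URL the notebook uses
    "https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF/resolve/main/"
    "zephyr-7b-beta.Q6_K.gguf"
)

os.makedirs(os.path.dirname(MODEL_PATH), exist_ok=True)
if not os.path.exists(MODEL_PATH):
    urllib.request.urlretrieve(MODEL_URL, MODEL_PATH)  # one-time download

# then point Llama(model_path=MODEL_PATH, ...) at the cached copy
```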