Do any of you apply ML or LLM models in your homelab? Anything related to AI?
With this being the new hype, I figured I should start applying it somehow to keep up with the ever-changing tech landscape.
Curious to see what others are doing in this regard.
I use Easy Diffusion on Windows and it works acceptably. I also tried a 7B LLaMA-based model on Windows, and it was terrible compared to cloud-hosted GPT-3/4. I only have a 1660 to work with, so I'm VRAM-limited and constrained by GPU speed. I have Jupyter with PyTorch set up as well, but I haven't had a need to use it yet.
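If you want to squeeze a 7B model onto a 6 GB card, a quantized GGUF build through llama-cpp-python is one low-effort route. A minimal sketch, assuming a Q4-quantized LLaMA-family checkpoint; the model path and layer count below are placeholders to tune:

```python
from llama_cpp import Llama

# A Q4_K_M quantized 7B model is roughly 4 GB on disk; n_gpu_layers controls
# how many transformer layers get offloaded to the GPU (the rest stay on CPU).
llm = Llama(
    model_path="models/llama-2-7b-chat.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,
    n_gpu_layers=20,  # lower this if you hit out-of-memory on 6 GB
)

out = llm("Q: What do people run in a homelab?\nA:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```

Offloading only part of the layers keeps the remainder in system RAM, which is slower but avoids running out of VRAM on small cards.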
At work, I’ve trained models with YOLO and have recently started using GPT-4 as a starting point for writing and coding.
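For anyone curious what the YOLO side looks like, the Ultralytics package keeps fine-tuning pretty short. A rough sketch, assuming ultralytics is installed and my_dataset.yaml is a placeholder for your own dataset config:

```python
from ultralytics import YOLO

# Start from a small pretrained checkpoint and fine-tune on a custom dataset.
model = YOLO("yolov8n.pt")  # nano variant, comfortable on a 6 GB card
model.train(data="my_dataset.yaml", epochs=50, imgsz=640)  # placeholder dataset yaml
metrics = model.val()  # evaluate on the validation split
```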
The best way to start is to program some useful utilities in CUDA; you don’t need a honking big dGPU for that, but you do need tensor cores.
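If you want to dip a toe into CUDA without writing C++, Numba’s CUDA JIT lets you write kernels from Python. A minimal vector-add sketch, assuming numba is installed and a CUDA-capable GPU is present:

```python
import numpy as np
from numba import cuda

@cuda.jit
def vec_add(a, b, out):
    # Each thread handles one element of the arrays.
    i = cuda.grid(1)
    if i < out.size:
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads = 256
blocks = (n + threads - 1) // threads
vec_add[blocks, threads](a, b, out)  # Numba copies host arrays to the device for you
print(out[:5])
```

It is a toy example, but the same kernel-launch pattern carries over once you move to bigger utilities.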