Hello,
With the new GPT-4 and its 128k context window, my question is: is there still an advantage to running a local LLM?
- Cost Considerations: Is there a cost benefit to running a local LLM compared to OpenAI?
- Specialized Training: Is it possible to train the model for specific tasks, similar to ‘Code Llama - Python’, but perhaps for areas like ‘Code Llama - Unreal Engine’?
I understand that for some applications, avoiding OpenAI's content restrictions might be a plus. However, when it comes to using an LLM as a coding assistant, are there any specific advantages to running it locally?
Passing it proprietary code is probably at the top of the list.