nderstand2grow to LocalLLaMA@poweruser.forum • "A fun day evaluating LLM Chat GUIs/Servers in Docker. Here's what I learned..." • 2 years ago

I have ollama on my Mac (not Docker) and installed the ollama web UI. It works fine, but their instructions for running ollama on a LAN network don't work for me. The flags they mention to add to the CLI command throw an error (especially the `*` part).
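For context, the usual way to expose ollama on a LAN (per the ollama FAQ) is the `OLLAMA_HOST` environment variable rather than extra CLI flags; a sketch, assuming a standard non-Docker install:

```shell
# Bind the ollama server to all interfaces instead of localhost only
# (OLLAMA_HOST is documented in the ollama FAQ; 11434 is the default port).
OLLAMA_HOST=0.0.0.0:11434 ollama serve

# If you run the macOS menu-bar app instead of `ollama serve`, set the
# variable for launchd and restart the app so it picks the setting up:
launchctl setenv OLLAMA_HOST "0.0.0.0"
```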
nderstand2grow to LocalLLaMA@poweruser.forum • "For roleplay purposes, Goliath-120b is absolutely thrilling me" • 2 years ago

You could use an M2 Ultra instead: $6,500 vs. 2 × $15,000 plus the rest of the build.
I think this is only available in llama.cpp. I've been using it for a while for simple structured outputs and am extremely happy with the results. With OpenAI's function calling, I always had to write validators: first to make sure the output is indeed JSON, and then another to make sure the JSON complies with my JSON schema. `grammar` makes all of that redundant because it is 100% guaranteed to generate the desired output (including JSON).
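The two hand-written validators described above (JSON parse check, then schema check) can be sketched like this. This is a minimal stdlib-only illustration; the schema, function name, and example output are hypothetical, not from the comment:

```python
import json

# Hypothetical flat schema: expected key -> expected Python type.
SCHEMA = {"name": str, "age": int}

def validate_output(raw: str, schema: dict) -> dict:
    """Validate raw model output: first that it is JSON at all,
    then that the parsed object matches the expected schema."""
    try:
        obj = json.loads(raw)              # step 1: syntactic validity
    except json.JSONDecodeError as e:
        raise ValueError(f"not JSON: {e}") from e
    if not isinstance(obj, dict):
        raise ValueError("expected a JSON object")
    for key, typ in schema.items():        # step 2: schema compliance
        if key not in obj:
            raise ValueError(f"missing key: {key}")
        if not isinstance(obj[key], typ):
            raise ValueError(f"wrong type for {key}: expected {typ.__name__}")
    return obj

print(validate_output('{"name": "Ada", "age": 36}', SCHEMA))
# → {'name': 'Ada', 'age': 36}
```

A grammar-constrained decoder makes both steps unnecessary, since outputs that would fail them can never be generated in the first place.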
Your comment is so insightful, thank you. If there are resources I can read/watch to learn about this stuff, I'd be happy if you could share them.