Hello LocalLLaMA.
Do you have tips on how to make the best use of models that haven't been fine-tuned for chat or instruct?
Here's my issue: I use LLMs for storywriting and for making character profiles (I've been doing that a lot for D&D character sheets, for example).
I feel that most models have a strong bias toward positive stories, happy endings, and really clichéd phrases. The stories have perfect grammar, but they're boring and clichéd as heck. Telling the model not to do that via instructions doesn't work well. I checked r/chatgpt for tips on getting good stories out of ChatGPT, and it seems there are no great solutions there either. Maybe this leaks into local models because a bunch of them use GPT-4-derived training data, so now local models want overly positive outputs as well.
So I thought, "Alright, I'll try a base model. Instead of giving it instructions, I'll make it think it's completing a book or something."
But that also doesn't work that well. Llama-2-70B, for example, easily falls into repetitive patterns, and I feel it's even worse than using a positively biased chat- or instruct-tuned model.
I'm looking for answers or insights on the following questions:
- Are there any base models worth using? I've tried the Yi base models, for example; they seem about the same as Llama-2-70B base (just faster). I'm more than willing to spend time prompt engineering in exchange for more interesting outputs.
- Do you know of resources, tricks, tips, or insights on making the best use of base models? Resources on how to prompt them? Sampler settings?
- Why do base models seem to suck so badly, even when I prompt them as pure text completion and assume no concept of instruction following? Mostly I see them fall into repeating the same sentence or structure over and over. Fine-tuned models don't do this, even if I otherwise dislike their outputs.
- Out of curiosity, are you aware of any models that have been fine-tuned, but not for chat or instruct? I'm wondering if anyone has found interesting use cases.
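For context on the repetition loops I'm describing: the usual mitigation is the repetition penalty knob most local UIs expose. Here's a toy sketch of the classic CTRL-style version, just so we're talking about the same thing (the vocabulary and logit values are made up):

```python
def apply_repetition_penalty(logits, seen_token_ids, penalty=1.3):
    """Scale down the logits of tokens already present in the context.

    CTRL-style repetition penalty: penalty > 1.0 discourages reuse,
    penalty == 1.0 disables it.
    """
    adjusted = list(logits)
    for t in set(seen_token_ids):
        if adjusted[t] > 0:
            adjusted[t] /= penalty   # shrink positive logits
        else:
            adjusted[t] *= penalty   # push negative logits further down
    return adjusted

# Toy vocabulary of 4 tokens; token 0 was just generated.
logits = [2.0, 1.0, 0.5, -1.0]
print(apply_repetition_penalty(logits, seen_token_ids=[0]))
```

With the default 1.3 penalty, token 0's logit drops from 2.0 to about 1.54, making an immediate repeat less likely; that single knob is often the difference between a base model looping and not.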
Base models are meant to be used for completion; at least, that's the intent. Sometimes they work in instruct modes like Alpaca anyway, but they'll produce extra output or fail to follow directions.
What sampler settings are you using? You can force models to get really out there in terms of creativity, depending on what you use.
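For example, the temperature and top-p knobs interact roughly like this. A toy sketch on made-up logits (no real model involved), showing that raising temperature flattens the distribution and top-p then decides how much of that flattened tail is actually sampleable:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature flattens them."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, p=0.9):
    """Nucleus sampling: keep the smallest set of tokens whose cumulative
    probability reaches p, then renormalize; everything else gets zeroed."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, total = set(), 0.0
    for i in order:
        kept.add(i)
        total += probs[i]
        if total >= p:
            break
    return [probs[i] / total if i in kept else 0.0 for i in range(len(probs))]

logits = [3.0, 1.5, 1.0, 0.2]
cold = softmax(logits, temperature=0.7)  # peaky: top token dominates
hot = softmax(logits, temperature=1.8)   # flatter: tail tokens get real mass
print(top_p_filter(hot, p=0.85))
```

The point is that temperature alone makes *everything* more likely, including garbage tokens; top-p (or min-p) is what trims the garbage back off, which is why people usually move both together.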
If you're not using a chat/instruct-tuned model, you should be using the notebook tab; the input that the chat tab creates will be chat/instruct-formatted.
I always use the raw tab, even when chatting (I look up the template manually if I'm using it chat-style). I like to see exactly what is given to the model and what it generates back. Sometimes I use command-line software when I'm not using the UI.
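For example, here's how I'd build an Alpaca-format prompt by hand before pasting it into the raw tab. The story instruction and seed text are just made-up examples; the useful trick is pre-seeding the response so the model continues in your tone instead of starting fresh:

```python
def alpaca_prompt(instruction, response_start=""):
    """Format a prompt in the standard Alpaca instruct template.

    `response_start` pre-seeds the reply: the model will continue from it
    rather than open with its own (often cheerful) first sentence.
    """
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
        f"{response_start}"
    )

print(alpaca_prompt(
    "Continue this grim, low-fantasy story. No happy endings.",
    response_start="The rain had not stopped for nine days when",
))
```

Seeing the exact string also makes it obvious when a frontend is silently wrapping your text in the wrong template, which is a common cause of base and instruct models behaving worse than they should.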
tbh I think it'll be a bad trade-off. What you lose in steerability is huge, and I'm not convinced you'll gain anything on the boring/overly-positive front. With an instruct model, you can at least tell it to write something dystopian.
> more interesting outputs

Try jacking up the temperature.