I have been playing with LLMs for novel writing. So far, all I have been able to use them for is brainstorming. No matter which model I use, the prose feels wooden, dull, and obviously AI.
Is anyone else doing this? Are there particular models that work really well, or any prompts you recommend? Any workflow advice for better leveraging LLMs in any way would be much appreciated!
Play with your sampler settings. The impact on creativity can be pretty significant.
The important elements are:
- Min-P, which sets a minimum probability cutoff as a fraction of the top token's probability; anything below the cutoff gets discarded. Go no lower than 0.03 if you want coherence at higher temps.
- Temperature, which flattens the distribution so the lower-probability options get relatively more weight and are picked more often. There's a rough sketch of how the two interact just after this list.
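If it helps to see the mechanics, here is a minimal Python sketch of how temperature and min-p interact when picking the next token. The function name and the token scores are made up for illustration; this is not any particular library's implementation.

    import math
    import random

    def sample_next_token(logits, temperature=1.0, min_p=0.05):
        """logits: dict of token -> raw score (hypothetical values for illustration)."""
        # Temperature: divide the scores before softmax; values > 1 flatten the
        # distribution, so lower-probability tokens get a bigger share.
        scaled = {tok: score / temperature for tok, score in logits.items()}
        max_score = max(scaled.values())
        exps = {tok: math.exp(s - max_score) for tok, s in scaled.items()}
        total = sum(exps.values())
        probs = {tok: e / total for tok, e in exps.items()}

        # Min-p: drop anything below min_p * (top token's probability), then
        # renormalise. A higher min_p keeps fewer tokens (more conservative);
        # a lower min_p keeps more (more creative, riskier at high temps).
        cutoff = min_p * max(probs.values())
        kept = {tok: p for tok, p in probs.items() if p >= cutoff}
        total_kept = sum(kept.values())
        tokens, weights = zip(*kept.items())
        return random.choices(tokens, weights=[w / total_kept for w in weights])[0]

    # Hypothetical scores, just to show the effect of a high temperature.
    scores = {"the": 5.0, "a": 4.2, "dragon": 2.5, "xylophone": -1.0}
    print(sample_next_token(scores, temperature=3.0, min_p=0.05))

At a temperature like 3 the distribution is very flat, which is exactly why the min-p floor matters: it is the main thing stopping junk tokens from sneaking in.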
I agree. These are the settings I use for yichat34: --top-k 0 --min-p 0.05 --top-p 1.0 --color -t 5 --temp 3 --repeat_penalty 1 -c 4096 -i -n -1
I think the --min-p I have is a bit low, so maybe you have min-p back to front? Lower lets more tokens through, I think.
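For anyone driving this from Python instead of the CLI, here is roughly what those flags map to with the llama-cpp-python binding. This is a sketch under a couple of assumptions: the model path and prompt are placeholders, and min_p is only exposed in fairly recent builds, so check your version.

    # Rough Python equivalent of the llama.cpp flags above, via llama-cpp-python.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/yi-34b-chat.Q4_K_M.gguf",  # placeholder path
        n_ctx=4096,    # -c 4096
        n_threads=5,   # -t 5
    )

    out = llm(
        "Write the opening paragraph of a gothic novel.",  # placeholder prompt
        max_tokens=512,        # the CLI's -n -1 instead generates until the context fills
        temperature=3.0,       # --temp 3
        top_k=0,               # --top-k 0 (disabled)
        top_p=1.0,             # --top-p 1.0 (disabled)
        min_p=0.05,            # --min-p 0.05 (assumes a build that exposes min_p)
        repeat_penalty=1.0,    # --repeat_penalty 1 (disabled)
    )
    print(out["choices"][0]["text"])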
What models are you using? I’ve had no luck with anything. Actually, orca-mini 3B is good at writing things matter-of-factly, but it doesn’t go into great detail about anything.