Messing around with Yi-34B-based models (Nous-Capybara, Dolphin 2.2) lately, I’ve been experiencing repetition in model output, where sections of previous outputs are included in later generations.
This appears to persist with both GGUF and EXL2 quants, and happens regardless of sampling parameters or Mirostat Tau settings.
I was wondering if anyone else has experienced similar issues with the latest finetunes, and whether they were able to resolve them. The models look very promising in Wolfram’s evaluation, so I’m wondering what I could be doing wrong.
Currently using Text Generation Web UI with SillyTavern as a front-end, either with Mirostat at Tau values between 2 and 5, or with the Midnight Enigma preset with repetition penalty at 1.0.
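To rule out the SillyTavern/webui stack, here is a minimal repro sketch that applies roughly the same sampler settings directly through llama-cpp-python (the GGUF path and prompt are placeholders, and defaults may not exactly match the webui's):

```python
# Minimal repro sketch outside the UI stack, using llama-cpp-python directly.
from llama_cpp import Llama

llm = Llama(
    model_path="models/nous-capybara-34b.Q4_K_M.gguf",  # placeholder path, adjust to your quant
    n_ctx=8192,        # match the 8k context used in the UI
    n_gpu_layers=-1,   # offload all layers if VRAM allows
)

out = llm.create_completion(
    prompt="### Instruction:\nWrite a short scene in a tavern.\n\n### Response:\n",
    max_tokens=512,
    mirostat_mode=2,   # Mirostat v2, matching the webui sampler
    mirostat_tau=3.0,  # somewhere in the 2-5 Tau range mentioned above
    mirostat_eta=0.1,
    repeat_penalty=1.1,
)
print(out["choices"][0]["text"])
```

If the repetition still shows up here, it points at the model/quant itself rather than the front-end resending stale context.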
Did you try disabling the BOS token?
Yes, the BOS token is disabled in my parameters.
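In case it helps to double-check, here is a quick sketch of what toggling BOS actually changes at the token level, using the Hugging Face tokenizer (the repo name is just an example; whether a BOS id appears by default depends on the tokenizer config of the finetune you're running):

```python
# Compare the prompt tokenization with and without special tokens (BOS).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("NousResearch/Nous-Capybara-34B")  # example repo

prompt = "USER: Hello\nASSISTANT:"
with_special = tok(prompt).input_ids                       # tokenizer default
without_special = tok(prompt, add_special_tokens=False).input_ids

print(with_special[:5])     # starts with the BOS id only if the config adds one
print(without_special[:5])  # same prompt, no special tokens prepended
```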
On EXL2, when it started doing that, I cranked the temperature to 2.0 rather than using dynamic temperature, and that made it go away. Going to try a higher repetition penalty next and see what happens. I’m at 8k context and it’s doing it.
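For the rep pen experiment, a rough sweep like this (reusing the llama-cpp-python setup from the earlier sketch; the values and the seed argument are just examples, and seed support depends on your llama-cpp-python version) makes the runs easier to compare:

```python
# Sweep repetition penalty at high temperature to see where the looping stops.
prompt = "### Instruction:\nContinue the story from the last scene.\n\n### Response:\n"

for rp in (1.0, 1.05, 1.1, 1.15, 1.2):
    out = llm.create_completion(
        prompt=prompt,
        max_tokens=256,
        temperature=2.0,    # the high-temp workaround mentioned above
        top_p=0.95,
        repeat_penalty=rp,
        seed=42,            # fixed seed so runs are comparable (recent versions)
    )
    print(f"--- repeat_penalty={rp} ---")
    print(out["choices"][0]["text"][:300])
```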