• 1 Post
  • 13 Comments
Joined 1 year ago
Cake day: November 10th, 2023

  • Slightly off-topic – I’ve been testing 13b and 7b models for a while now, and I’m really interested if people have a good one to check out, because at least for now, I’ve settled on a 7b model that seems to work better than most of the 13b models I’ve tried.

    Specifically, I’ve been using OpenChat 3.5 7b (Q8 and Q4) and it’s been really good for my work so far, punching well above its weight class – much better than any of the 13b models I’ve tried. (I’m not running any formal benchmarks; it just seems to understand what I want better than the others. I’m not doing any function calling, but even the 4-bit 7b model can generate JSON as well as respond coherently – see the sketch below the link.)

    Note: I’m specifically using the original (non-16k) models; the 16k variants seem to be borked or something.

    Link: https://huggingface.co/TheBloke/openchat_3.5-GGUF
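
    If you want to poke at it yourself, here’s a minimal sketch using llama-cpp-python – the GGUF filename and the “GPT4 Correct User” prompt template are assumptions based on TheBloke’s model card, so swap in whichever quant you actually downloaded:

        # Minimal sketch: load a GGUF quant of OpenChat 3.5 and ask for JSON.
        # The filename and prompt template are assumptions; check the model card.
        from llama_cpp import Llama

        llm = Llama(model_path="openchat_3.5.Q4_K_M.gguf", n_ctx=4096)

        prompt = (
            "GPT4 Correct User: Return a JSON object with keys 'city' and "
            "'population' for Tokyo.<|end_of_turn|>GPT4 Correct Assistant:"
        )
        out = llm(prompt, max_tokens=128, temperature=0.2,
                  stop=["<|end_of_turn|>"])
        print(out["choices"][0]["text"])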



  • He’s not wrong, but there are lots of things that can throw a wrench into the predictability – for example, if you’re using a Hugging Face model and the weights file changes out from under your nose.

    Or if the hardware you’re executing on has a bug (like the Pentium FDIV floating-point bug back in the day).

    Or if the hardware the model runs on reduces or increases its numerical precision in a significant way.

    Or if the stochastic random bits (the sampling seed) are unobservable, etc.

    In those cases it’s still deterministic – it’s just not practical to determine, especially when small hardware changes (as opposed to algorithmic ones) can change the output.
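
    Two of those hazards are at least controllable in software: pin the exact commit of the weights so the repo can’t change out from under you, and seed the RNGs so the “stochastic random bits” become observable. A sketch with transformers (the commit hash is a placeholder, and the repo name assumes the OpenChat model mentioned above):

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

        REV = "0123abc"  # placeholder: pin the exact commit you validated

        tok = AutoTokenizer.from_pretrained("openchat/openchat_3.5", revision=REV)
        model = AutoModelForCausalLM.from_pretrained(
            "openchat/openchat_3.5", revision=REV
        )

        set_seed(42)  # seeds Python, NumPy, and torch RNGs in one call
        torch.use_deterministic_algorithms(True)  # error out on nondeterministic kernels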





  • BrainSlugs83 (OP) to LocalLLaMA@poweruser.forum · Dynamic LoRAs -- Crazy idea? · 1 year ago

    No. I’m not advocating for creating a text-to-LoRA model. Though that would be a neat project, I think you’d have a monumental training task on your hands… and really, it just doesn’t seem that practical. Fine-tuning isn’t expensive enough to merit trying to train or build that network anyway, so “the juice wouldn’t be worth the squeeze”.

    Picking the right LoRA for a given response is what an MoE (Mixture of Experts) system already does.

    What I’m proposing is training a regular LLM to occasionally emit tokens which signal another ML network to run periodically and make minor runtime adjustments to the current LoRA to keep it “on track”.

    Think a thousand tiny micro-adjustments over the course of a long conversation – which could be used to shift the current latent space into one where the model has an “intuitive” or “latent” understanding of much of what is currently in the context, so that the actual context and attention tokens could be freed up for later use.

    Basically, if the network is already in the optimal LoRA, the ML network would just spit out an identity tensor for the LoRA so that it never changes.

    But as the LLM realizes it’s no longer in the realm of its current latent space, it spits out a special “think-harder” token, which signals the ML network to run.

    The ML network takes the current context, pushes it into a weighted, vectorized embedding representative of the current “state”, and spits out a tensor that makes micro-adjustments to the LoRA / PEFT adapter.
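
    To pin down the shapes (my notation, nothing standard – assuming the usual additive LoRA parameterization), the adjuster network g would compute something like:

        \[
        W^{(t)}_{\mathrm{eff}} = W_0 + \tfrac{\alpha}{r}\,\bigl(B + \Delta B_t\bigr)\bigl(A + \Delta A_t\bigr),
        \qquad
        (\Delta A_t,\ \Delta B_t) = g\bigl(\mathrm{embed}(\mathrm{context}_t)\bigr)
        \]

    The “identity tensor” case from above is then just \Delta A_t = \Delta B_t = 0: the adjuster emits zeros and the adapter stays put.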

    That’s one application of this idea that I was proposing.
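
    For concreteness, here’s a toy PyTorch sketch of that loop. Every name in it is hypothetical (this isn’t any real library’s API); it just shows the moving parts: a LoRA layer with mutable factors, an adjuster network zero-initialized to the no-op update, and a step that only fires on the special token.

        import torch
        import torch.nn as nn

        class LoRALinear(nn.Module):
            """Frozen base weight plus a mutable low-rank update:
            W_eff = W + (alpha / r) * B @ A."""
            def __init__(self, d_in, d_out, r=8, alpha=16):
                super().__init__()
                self.weight = nn.Parameter(torch.randn(d_out, d_in),
                                           requires_grad=False)
                self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)
                self.B = nn.Parameter(torch.zeros(d_out, r))
                self.scale = alpha / r

            def forward(self, x):
                return x @ self.weight.T + self.scale * (x @ self.A.T) @ self.B.T

        class LoRAAdjuster(nn.Module):
            """Maps a pooled context embedding to deltas for (A, B).
            Zero-initialized, so it starts out as the identity / no-op update."""
            def __init__(self, d_ctx, d_in, d_out, r=8):
                super().__init__()
                self.to_dA = nn.Linear(d_ctx, r * d_in)
                self.to_dB = nn.Linear(d_ctx, d_out * r)
                for lin in (self.to_dA, self.to_dB):
                    nn.init.zeros_(lin.weight)
                    nn.init.zeros_(lin.bias)
                self.shapes = (r, d_in, d_out)

            def forward(self, ctx_emb):
                r, d_in, d_out = self.shapes
                dA = self.to_dA(ctx_emb).view(r, d_in)
                dB = self.to_dB(ctx_emb).view(d_out, r)
                return dA, dB

        THINK_HARDER = 50_000  # hypothetical id of the special "think-harder" token

        def maybe_adjust(layer, adjuster, last_token_id, ctx_emb):
            # Only pay for the adjuster when the LLM asks for it.
            if last_token_id == THINK_HARDER:
                dA, dB = adjuster(ctx_emb)
                with torch.no_grad():
                    layer.A += dA  # micro-adjustment, not a full re-finetune
                    layer.B += dB

    How the adjuster (and the special token’s own embedding) would actually get trained end-to-end is the part this sketch hand-waves over.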