Has anyone explored Intel’s new model yet? It’s a 7B model trained on SlimOrca, and it’s currently the top 7B model on the HF Open LLM Leaderboard.

I’ve found other 7B models to be surprisingly helpful, especially for annotation/data extraction tasks, so I’m curious if it’s worth replacing teknium/OpenHermes-2.5-Mistral-7B with this model.
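If you want to sanity-check it before swapping models, here’s a minimal sketch of a side-by-side run on an extraction prompt with transformers. Assumption: "Intel/neural-chat-7b-v3-1" is my guess at the Intel checkpoint being discussed; swap in the actual repo id, and note that apply_chat_template only works if the repo ships a chat template.

```python
# Rough side-by-side check on an annotation/extraction prompt.
# ASSUMPTION: "Intel/neural-chat-7b-v3-1" is a guess at the Intel checkpoint
# discussed here; replace it with the actual repo id if it differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

PROMPT = "List every person mentioned in: 'Alice emailed Bob about the Q3 report.'"

for model_id in ["teknium/OpenHermes-2.5-Mistral-7B", "Intel/neural-chat-7b-v3-1"]:
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )
    # Uses whatever chat template the repo ships; if a repo has none,
    # you would need to format the prompt by hand instead.
    input_ids = tokenizer.apply_chat_template(
        [{"role": "user", "content": PROMPT}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
    print(f"\n=== {model_id} ===")
    print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```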

  • No-Link-2778B · 1 year ago

    Its data is public, but the OpenHermes-2.5 dataset is gated and not accessible.

  • CardAnarchistB · 1 year ago

    I could only get pretty muddled responses from the model.

    The prompt template looks simple, but I suspect I didn’t enter it into SillyTavern correctly, because the outputs I was getting were similar to what I see when I have the wrong template selected for a model.

    Shrugs

    For a model to be successful, its creators should really pick a standard template (preferably ChatML) and clearly state that they are using it.
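    For reference, ChatML wraps every turn in explicit role tags; a minimal sketch of the layout (the system line is just a placeholder):

    ```
    <|im_start|>system
    You are a helpful assistant.<|im_end|>
    <|im_start|>user
    {your prompt}<|im_end|>
    <|im_start|>assistant
    ```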

  • vatsadevB · 1 year ago

    IMPORTANT!

    This isn’t trained from scratch; it’s another Mistral fine-tune with DPO, but on SlimOrca rather than UltraChat.

    I would stick with OpenHermes; it’s been tried much more widely and has proven solid.
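    For context, DPO is a preference-tuning stage on top of ordinary supervised fine-tuning: instead of plain instruction/response pairs, it consumes records with a preferred and a rejected completion. A made-up illustration (field names follow the common chosen/rejected convention):

    ```python
    # Hypothetical example of the kind of record a DPO trainer consumes;
    # the text is invented purely for illustration.
    preference_pair = {
        "prompt": "Summarize: the meeting moved from Tuesday to Thursday.",
        "chosen": "The meeting was rescheduled from Tuesday to Thursday.",
        "rejected": "The meeting is on Tuesday.",
    }
    ```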