Neural-chat-7b-v3-1 GGUF. New Mistral finetune

fakezeta (OP) to LocalLLaMA@poweruser.forum · English · 2 years ago · 10 comments

Couldn’t wait for the great TheBloke to release it, so I’ve uploaded a Q5_K_M GGUF of Intel/neural-chat-7b-v3-1.

From some preliminary tests on PISA sample questions, it seems at least on par with OpenHermes-2.5-Mistral-7B.

https://preview.redd.it/bkaezfb51c0c1.png?width=1414&format=png&auto=webp&s=735d0f03109488e01d65c1cf8ec676fa7e18c1d5
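For anyone who wants to try the quant locally, here is a minimal sketch of loading it with llama-cpp-python; the file name, sampling settings, and the `### System / ### User / ### Assistant` prompt template are assumptions (the usual neural-chat convention), so check the model card before relying on them.

```python
# Minimal sketch: run the Q5_K_M GGUF with llama-cpp-python.
# File name and prompt template are assumptions; verify against the model card.
from llama_cpp import Llama

llm = Llama(
    model_path="neural-chat-7b-v3-1.Q5_K_M.gguf",  # path to the downloaded GGUF
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

prompt = (
    "### System:\nYou are a helpful assistant.\n"
    "### User:\nIn one sentence, what is a GGUF quantization?\n"
    "### Assistant:\n"
)
result = llm(prompt, max_tokens=128, temperature=0.7)
print(result["choices"][0]["text"])
```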

  • metalman123 · 2 years ago

    Intel has entered the game. Things are getting interesting.

    If we ever get access to a Mistral or Yi 70B-class model, I think a lot of companies are going to be in trouble with their current models.

  • fakezeta (OP) · 2 years ago

    Also added it to the Ollama library, in case anyone needs it:

    https://ollama.ai/fakezeta/neural-chat-7b-v3-1
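Assuming the Ollama daemon is running and the model above has been pulled, calling it from Python with the official ollama client might look roughly like this (only the model name comes from the link above; everything else is illustrative):

```python
# Rough sketch: query the Ollama build of the model through the local daemon.
# Assumes `pip install ollama` and `ollama pull fakezeta/neural-chat-7b-v3-1` were done first.
import ollama

reply = ollama.chat(
    model="fakezeta/neural-chat-7b-v3-1",
    messages=[{"role": "user", "content": "Give me a one-line summary of what a Mistral finetune is."}],
)
print(reply["message"]["content"])
```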

  • mcmoose1900 · 2 years ago

    For anyone wondering, you can actually rent Gaudi from Intel’s Dev Cloud to finetune like this:

    https://eduand-alvarez.medium.com/llama2-fine-tuning-with-low-rank-adaptations-lora-on-gaudi-2-processors-52cf1ee6ce11

    https://developer.habana.ai/intel-developer-cloud/

    The blog cites $10/hour for 8 HPUs.
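The linked blog runs the finetune through Optimum Habana on Gaudi; as a rough, hardware-agnostic illustration of the same LoRA idea, a PEFT setup might look like the sketch below (base model and hyperparameters are placeholders, not taken from the post):

```python
# Illustrative LoRA configuration with Hugging Face PEFT.
# Hyperparameters and target modules are placeholders; the blog itself uses Optimum Habana on Gaudi HPUs.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=16,                     # low-rank dimension
    lora_alpha=32,            # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
```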

  • AdamDhahabi · 2 years ago

    Interested to know how it scores for RAG use cases; there is a benchmark for that: https://github.com/vectara/hallucination-leaderboard

    Up to now, Mistral underperforms Llama 2.

    • fakezeta (OP) · 2 years ago

      Currently, all the finetuned versions of Mistral I’ve tested have a high rate of hallucination; this one also seems to have that tendency.

  • fragilesleep · 2 years ago

    Thank you for your work! Is it possible to download this model if I can’t run Ollama? I couldn’t find a download link or an HF repo.

  • perlthoughts · 2 years ago

    Nice, I released one too, with 16k extended context:
    https://huggingface.co/NurtureAI/neural-chat-7b-v3-16k
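For reference, loading that extended-context variant with plain transformers could look roughly like this (dtype/device settings are illustrative assumptions, not from the model card):

```python
# Sketch: load the 16k-context variant with transformers.
# torch_dtype/device_map choices are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "NurtureAI/neural-chat-7b-v3-16k"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer("### User:\nHello!\n### Assistant:\n", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```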
