• a_beautiful_rhindB · 10 months ago

    It will be interesting to compare it to Spicyboros and 70B Dolphin. Spicy already “fixed” Yi for me. I think we finally got the middle-size model Meta didn’t release.

  • ambient_temp_xenoB · 10 months ago

    I’ve been trying out the GGUFs I found today, and it seems close enough to Dolphin 70B at half the size.

    It pointed out that the ‘each brother’ part of the Sally test could be read as implying a different pair of sisters for each brother, and when you change the question to say ‘the brothers share the same 2 sisters’ it gets it right. That’s whatever, but it was interesting that it picked up on the ambiguity in the test (rough arithmetic sketch below).
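
    For anyone who hasn’t seen it, the usual phrasing (roughly, from memory) is: “Sally has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have?” A minimal sketch of the arithmetic behind the two readings:

    ```python
    # Minimal sketch of the Sally-test arithmetic, assuming the phrasing above.
    brothers = 3
    sisters_per_brother = 2

    # Intended reading: all brothers share the same 2 sisters, and Sally is
    # one of them, so Sally has exactly 1 sister (the brother count is a
    # distractor).
    sallys_sisters = sisters_per_brother - 1
    print(sallys_sisters)  # 1

    # The reading the model flagged: taken literally, "each brother has 2
    # sisters" doesn't force the sister sets to coincide, so the count isn't
    # pinned down until you add "the brothers share the same 2 sisters".
    ```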

  • SlimxshadyxB · 10 months ago

    How does this compare with Dolphin 2.2 Mistral 7B?

  • WolframRavenwolfB · 10 months ago

    I took a short break from my 70B tests (still working on that!) and tried TheBloke/dolphin-2_2-yi-34b-GGUF Q4_0. It instantly claimed 4th place on my list.

    A 34B taking 4th place among the 13 best 70Bs! A 34B model that beats 9 70Bs (including dolphin-2.2-70B, Samantha-1.11-70B, StellarBright, Airoboros-L2-70B-3.1.2 and many others). A 34B with 16K native context!

    Yeah, I’m just a little excited. I see a lot of potential with the Yi series of models and proper finetunes like Eric’s.

    Haven’t done the RP tests yet, so back to testing. Will report back once I’m done with the current batch (70Bs take so damn long, and 120B even more so).

    • iChristB · 10 months ago

      Wow, I gotta try it, thanks for the hype! Do the GPTQ/AWQ versions differ from GGUF in terms of context? It listed the context as only 4096.
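
      In case it matters for testing the context length, here’s a minimal llama-cpp-python sketch for explicitly requesting the larger window when loading the GGUF; the filename and the 16384 value are just placeholders based on the 16K claim above, not something I’ve verified:

      ```python
      # Minimal sketch: asking llama-cpp-python for a 16K context window when
      # loading the GGUF. Filename and n_ctx value are assumptions, not verified.
      from llama_cpp import Llama

      llm = Llama(
          model_path="dolphin-2_2-yi-34b.Q4_0.gguf",  # placeholder filename
          n_ctx=16384,  # explicitly request 16K instead of the loader's default
      )

      out = llm("Q: What is the capital of France?\nA:", max_tokens=16)
      print(out["choices"][0]["text"])
      ```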