• IxinDow · 1 year ago

    How many tokens are in your Substack example?
    Do you have examples of using the model for fiction at lengths of 16K-40K tokens?

  • mcmoose1900 · 1 year ago

    Almost the same syntax as Yi Capybara. Excellent.

    I propose all Yi 34B 200K finetunes use Vicuna-ish prompt syntax, so they can ALL be merged into one hellish Voltron model.
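    For readers unfamiliar with it, here is a minimal sketch of what "Vicuna-ish" prompt syntax usually means in practice, assuming the common USER:/ASSISTANT: turn markers; the exact system preamble and separators vary between finetunes, so check each model card.

    ```python
# Minimal sketch of a Vicuna-style prompt builder (illustrative only;
# the actual template for any given finetune is defined by its model card).

def build_vicuna_prompt(system, history, user_msg):
    # history is a list of (user, assistant) turns from earlier in the chat
    parts = [system.strip()]
    for user, assistant in history:
        parts.append(f"USER: {user}")
        parts.append(f"ASSISTANT: {assistant}")
    parts.append(f"USER: {user_msg}")
    parts.append("ASSISTANT:")  # left open for the model to complete
    return "\n".join(parts)

print(build_vicuna_prompt(
    "A chat between a curious user and an artificial intelligence assistant.",
    [],
    "Summarize this thread.",
))
    ```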

      • SomeOddCodeGuy · 1 year ago

        Just wanted to come back and let you know I started using this last night, and it is fantastic. I haven’t put it through much testing yet, but on initial use I’m very impressed by this model as a general-purpose AI assistant. It keeps to the Assistant’s more informal speech patterns while also answering questions well and keeping up with large context. Those are three checkboxes I’ve never been able to check at once. This praise won’t get much visibility since it’s an older thread, but I just wanted to let you know.

  • mcmoose1900 · 1 year ago

    More random feedback: you should put some combination of Yi, 34B, and/or 200K in the title.

    No one tags anything on HF, so the only way to browse models is by title. I would have totally missed this in my Yi/34B searches if not for the Reddit post.
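    (As an aside, this is why title keywords matter: a sketch of name-based search with the huggingface_hub client is below; the query string is just an example, not the model's actual repo name.)

    ```python
# Sketch: keyword search on the Hugging Face Hub via huggingface_hub.
# The query string is an example; adjust it to whatever appears in the repo name.
from huggingface_hub import HfApi

api = HfApi()
for model in api.list_models(search="Yi 34B 200K", limit=10):
    print(model.id)
    ```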

  • migtissera (OP) · 1 year ago

    On another note, this place is just super hostile! I didn’t think it would be, considering it’s the LocalLLaMA subreddit and we are all here to support open-source or freely available models.

    This is harsher than the Twitter mob!

    I’ll still release models, but sorry guys, not coming here again.