I tried applying a lot of prompting techniques with 7B and 13B models, and no matter how hard I tried, there was barely any improvement.

  • AnonymousD3vilB · 10 months ago

    I’ve had success with 7B Llama 2 across multiple prompt scenarios. Make sure you are defining the objective clearly (there’s a sketch of what I mean below).

    At first, after reading your post, I thought you were talking about something even smaller (phi-1 / TinyLlama).
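
    To illustrate “defining the objective clearly”, here is a minimal sketch. It assumes the *chat* variant of Llama 2 7B, which expects the [INST] / <<SYS>> wrapping; the objective text itself is just an example I made up, not something from the thread.

    ```python
    # Minimal sketch: wrap a system message and user message in the Llama 2
    # chat template, and state the task, the input, and the exact output
    # format instead of a vague one-liner.
    def build_llama2_chat_prompt(system: str, user: str) -> str:
        """Build a Llama 2 chat-format prompt string."""
        return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

    system = (
        "You are a precise assistant. Answer only with the requested output, "
        "no preamble."
    )
    user = (
        "Objective: extract every email address from the text below.\n"
        "Output format: one address per line, nothing else.\n\n"
        "Text: Contact alice@example.com or bob@example.org for details."
    )

    print(build_llama2_chat_prompt(system, user))
    ```

    The point is less the exact wording and more that the small model gets an unambiguous task plus an explicit output format, which is where I’ve seen most of the improvement.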

  • gamesntechB · 10 months ago

    I’ll be honest, this question and the answers here are a classic example of LLM prompting. What would be very useful is some examples of what you tried and the challenges you faced with them, so people can give more informed and targeted advice.

  • No-Belt7582B · 10 months ago

    Most of the time the issue is with the prompt template, especially the spacing, e.g. ###instruction vs ### instruction (see the sketch below).

    Smaller models need a well-formed prompt. I tried the newer Mistral 2.5 7B and prompts work superbly on it.
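
    A minimal sketch of keeping the template exact, using the Alpaca-style "### Instruction:" header as an assumed example; check the model card of whatever fine-tune you are actually running and copy its template character for character, including spaces and blank lines.

    ```python
    # Keep the training-time template in one place and never retype it by hand.
    ALPACA_TEMPLATE = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

    def build_prompt(instruction: str) -> str:
        """Fill the template without touching its whitespace."""
        return ALPACA_TEMPLATE.format(instruction=instruction.strip())

    good = build_prompt("Summarize the following text in one sentence: ...")

    # A tokenizer sees "###Instruction" and "### Instruction" as different
    # token sequences, so one missing space silently puts a small model
    # off-distribution even though the prompt looks almost identical.
    bad = good.replace("### Instruction", "###Instruction")
    print(good == bad)  # False: a single missing space is a different prompt
    ```

    Centralizing the template like this is mostly about avoiding exactly the ###instruction vs ### instruction slip mentioned above.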