I tried applying a lot of prompting techniques to 7B and 13B models, and no matter how hard I tried, there was barely any improvement.
I’ve had success with 7B Llama 2 across multiple prompt scenarios. Make sure you are defining the objective clearly.
At first, reading your post, I thought you were talking about something even smaller (phi-1 / TinyLlama).
what models did you try?
It’s a skill issue
I’ll be honest, this question and the answers here are a classic example of vague LLM prompting talk. What would be really useful is some examples of what you tried and the challenges you ran into with those trials, so people can give more informed, targeted advice.
Most of the time the issue is with the prompt template, especially whitespace: `###instruction` vs `### instruction`, etc.
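To make the whitespace point concrete, here’s a minimal sketch assuming an Alpaca-style `### Instruction:` template (the template name and helper function are just for illustration). The idea is to keep the template as a single constant and only ever substitute into it, so the spacing and newlines the model was fine-tuned on are never accidentally changed:

```python
# Alpaca-style template: the exact spacing, newlines, and trailing
# colon must match what the model was fine-tuned on. Keep it as one
# constant so it can't drift between call sites.
ALPACA_TEMPLATE = "### Instruction:\n{instruction}\n\n### Response:\n"

def build_prompt(instruction: str) -> str:
    # Strip stray whitespace from the user's text, but never touch
    # the template itself.
    return ALPACA_TEMPLATE.format(instruction=instruction.strip())

print(build_prompt("Summarize the article in one sentence."))
```

With a 7B model, `###Instruction` (no space) versus `### Instruction:` can be the whole difference between garbage and a clean completion, so it’s worth asserting the template in one place rather than hand-writing it per prompt.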
Smaller models need a good prompt. I tried the newer Mistral 2.5 7B, and prompts work superbly on it.