Hi, does anyone know of any (peer-reviewed) articles testing LLM performance when you give the model a role? It’s something most of us do in prompts, and it’s somewhat logical that introducing such a parameter would increase the likelihood of the desired output, but has anyone actually tested it in a citable article?
I’m thinking of the old, "You are a software engineer with years of experience in coding .html, .json … " etc.
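For concreteness, that kind of role prompt usually ends up as the "system" message in a chat-style API payload. A minimal sketch of what I mean (the function name and message shape here are just illustrative, not tied to any particular provider):

```python
def build_messages(role_description: str, user_question: str) -> list[dict]:
    """Prepend a persona/system message before the user's actual request."""
    return [
        {"role": "system", "content": role_description},
        {"role": "user", "content": user_question},
    ]

# Example: the classic "experienced engineer" persona from above.
messages = build_messages(
    "You are a software engineer with years of experience in HTML, JSON, "
    "and related web technologies.",
    "Why might this JSON config fail to parse?",
)
print(messages[0]["role"])  # the persona rides along as the system message
```

Whether that prepended persona measurably changes output quality is exactly what I’d like to see tested.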
This little bit right here is very important if you want to work with an AI regularly.
I remember seeing an article about this a few months back, which led to my working on an Assistant prompt, and it’s been hugely helpful.
I imagine this comes down to how generative AI works under the hood. It ingested tons of books, tutorials, posts, etc. from people who identified as certain things. Telling it to also identify as that thing could surface a lot of information it wouldn’t otherwise draw on.
I always recommend that folks set up roles for their AI when working with it, because the results I’ve personally seen have been miles better when you do.