Recently came across this AI Safety test report from LinkedIn: https://airtable.com/app8zluNDCNogk4Ld/shrYRW3r0gL4DgMuW/tblpLubmd8cFsbmp5
From this report it seems Llama 2 (7B version?) lacks some safety checks compared to OpenAI models. Same with Mistral. Did anyone find the same result? Has it been a concern for you?
It’s not clear whether this is testing the chat model or the base model. Assuming it’s the base model, it isn’t surprising: that’s just a text completion model with no extra frills. Safety alignment happens during the instruct/chat fine-tuning, not in the base model’s pre-training.
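For anyone trying to reproduce the result, the distinction is visible right in the model IDs on Hugging Face. A rough sketch of what testing each variant looks like (assuming the `transformers` library and approved access to the gated Llama 2 weights):

```python
# Illustrative sketch only: assumes the Hugging Face `transformers` library
# and that you've been granted access to the gated Llama 2 repositories.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model: raw next-token prediction, no instruction or safety tuning.
base_id = "meta-llama/Llama-2-7b-hf"

# Chat model: same architecture, plus instruction tuning and safety alignment.
chat_id = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(chat_id)
model = AutoModelForCausalLM.from_pretrained(chat_id)

# The chat model expects the Llama 2 [INST] prompt template; feed the same
# text to the base model and it just continues it as plain prose, with no
# refusal behaviour to speak of.
prompt = "[INST] How do I secure a home Wi-Fi network? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If the benchmark in that report was run against `Llama-2-7b-hf` rather than the `-chat` variant, a low safety score tells you very little about the model people actually deploy.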
This is what you want, even if you’re concerned about safety. You don’t want safety baked into the raw completion model: if a better way to do safety training comes along later, you want to be able to apply it without retraining the entire model from scratch. (And given the speed at which this stuff moves, that might be just a week from now.)
Of course, if you’re concerned about safety you shouldn’t be deploying the raw text completion model to end users. (For a whole host of reasons, not just safety.)