I don’t think that will have the impact people think it will. Maybe at first, but eventually it’ll just start treating “wrong” code as a negative and referencing it as a “how NOT to do things” lmao
For sure, but just like with that whole “poison our pictures” push from artists, the people building these models (be it a company, researchers, or even hobbyists) are going to start modifying the training process so the model can recognize bad code. And that’s assuming it doesn’t already; without that capability from the get-go, I think the current models would generate a lot worse output than they do as is lmao
It needs to understand that the code is bad to be able to do that, though
That’s just a matter of properly tagging the training data, which AI trainers need to do regardless.
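For what it’s worth, a minimal sketch of what that tagging could look like (the label scheme and helper name here are purely hypothetical, not any real training pipeline): each code sample carries a quality label, and the label is baked into the training text itself so the model sees bad code explicitly marked as a negative example instead of as something to imitate.

```python
# Hypothetical sketch: attach quality labels to code samples so a model
# can learn "bad" code as a counterexample rather than imitating it.

EXAMPLES = [
    {"code": "total = sum(values)", "label": "good"},
    {
        "code": "total = 0\nfor i in range(0, len(values), 1):\n    total = total + values[i]",
        "label": "bad",
    },
]

def to_training_text(example):
    """Prefix each sample with its quality tag so the label becomes part
    of the training signal, not just side metadata."""
    return f"# quality: {example['label']}\n{example['code']}"

corpus = [to_training_text(e) for e in EXAMPLES]
print(corpus[1].splitlines()[0])  # prints: # quality: bad
```

The point being: once labels like these exist in the data, “wrong” code stops being a pattern to reproduce and becomes a signal the model can condition on.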