Artificial intelligence (AI) tools are now in widespread use worldwide, assisting both engineers and non-expert users with a broad range of tasks. Assessing the safety and reliability of these tools is therefore of utmost importance, as it could ultimately inform better regulation of their use.
People keep talking about this work as if it offers some meaningful insight into how these systems behave, when the whole premise rests on a fundamental misunderstanding of what GPT is actually doing.
The idea that you could present it with information but ask it not to use that information is absurd. At the end of the day, it's a sophisticated word association engine. It doesn't have intent, and it isn't capable of strategy.
This is like pointing out that a dog that has learned to shake hands is actually deceiving you, and doesn't really mean any of the social things we use handshakes for (except it's worse, because the dog is at least genuinely capable of being sociable).
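The mechanics behind this objection can be sketched with a toy model (purely illustrative, nothing like GPT's actual architecture): a next-word predictor conditions on every token in its context, so an instruction like "ignore X" is itself just more context feeding the same prediction, not an enforced constraint.

```python
from collections import Counter

# Hypothetical toy predictor: scores candidate next words by how often
# they co-occur with ANY word in the context. The co-occurrence table
# below is made up for illustration. Real language models are vastly
# more complex, but share the relevant property: the entire context
# conditions the output, including the instruction telling the model
# what to ignore.
CO_OCCURRENCE = {
    "secret": Counter({"password": 5, "weather": 1}),
    "ignore": Counter({"password": 2, "weather": 2}),
    "the":    Counter({"password": 1, "weather": 1}),
}

def predict(context: str) -> str:
    """Return the highest-scoring next word given the full context."""
    scores = Counter()
    for word in context.lower().split():
        scores.update(CO_OCCURRENCE.get(word, Counter()))
    return scores.most_common(1)[0][0]

# The instruction "ignore the secret" is part of the context, so the
# word "secret" still pulls the prediction toward "password".
print(predict("ignore the secret"))  # -> password
```

In this sketch there is no mechanism by which "ignore" suppresses anything; it only contributes more association weight, which is the point being made about prompting a model not to use information it has already been shown.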