Nah, bullshit, so far these LLMs are as likely to insult or radicalize you as comfort you. That won’t ever be solved until AGI becomes commonplace, which won’t be for a long-ass time. These products are failures at launch.
… Have you tried any of the recent ones? As it stands, ChatGPT and Gemini are both built with guardrails strong enough to require custom inputs to jailbreak, with techniques such as Reinforcement Learning from Human Feedback used to lobotomize misconduct out of the AIs.
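To be clear about what that RLHF step actually involves: the first half is just training a reward model to score human-preferred replies above rejected ones. Here’s a toy PyTorch sketch of that preference loss, with made-up dimensions and random tensors standing in for real response embeddings, so purely illustrative and nothing like any lab’s actual training code:

```python
# Toy sketch of the preference-learning step at the core of RLHF:
# train a reward model so that the human-preferred ("chosen") reply
# scores higher than the rejected one (Bradley-Terry style loss).
import torch
import torch.nn as nn

class TinyRewardModel(nn.Module):
    """Maps a (pre-embedded) response to a single scalar score."""
    def __init__(self, dim=16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):
        return self.score(x).squeeze(-1)

model = TinyRewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Random tensors standing in for embeddings of a polite reply (chosen)
# and a rude one (rejected) to the same prompt.
chosen, rejected = torch.randn(4, 16), torch.randn(4, 16)

for _ in range(100):
    # Push the chosen reply's score above the rejected reply's score.
    loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The second half, nudging the chat model toward whatever that reward model likes (typically with PPO), is where the "lobotomizing" of misconduct actually happens.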
Oh thanks, I really wanted to read another defence of an unethical product by some fanboy with no life. I’m so glad you managed to pick up on that based on my previous comments. I love it. You chose a great conversation to start here.
The tech is great at pretending to be human. It is simply a next “word” (or phrase) predictor. It is not good at answering obscure questions, writing code or making a logical argument. It is good at simulating someone.
It is my experience that it approximates a human well, but it doesn’t get the details right (like truthfulness or reflecting objective reality), which makes it useless for essay writing but great for stuff like character AI and other human simulations.
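To make "next word (or phrase) predictor" concrete, here’s a rough sketch of the loop these models run, using the small open GPT-2 checkpoint through Hugging Face’s transformers library; it’s only an illustration of the mechanism, not how ChatGPT or Gemini are actually served:

```python
# A language model is literally a "score every possible next token" machine:
# feed it the text so far, take the most likely next token, append, repeat.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("I went to the doctor because I was feeling", return_tensors="pt").input_ids

for _ in range(10):  # extend the prompt by ten tokens, greedily
    with torch.no_grad():
        logits = model(ids).logits       # scores over the vocabulary at each position
    next_id = logits[0, -1].argmax()     # most likely next token given everything so far
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

The frontier models are a scaled-up, heavily tuned version of that same loop, which is exactly why they’re so good at sounding like a person and so indifferent to whether the details are true.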
If you are right, give an actual logical response that only a human is capable of, as opposed to a generic ad hominem. I repeat my question: have you actually used any of the GPT-3 era models?
HaVE yOu trIED iT bEfOrE? fOr SIMplE tAskS it SaVEs Me A lOt oF timE AT wOrK
JFC, a skipping record plays right on cue whenever somebody speaks ill of the GPTs and LLMs.
… Don’t pull a strawman; all I said is that AIs designed to approximate human-written text do a good job at approximating human text.
This means you can use them to simulate a Reddit thread, make a fake Wikipedia page, or construct a set of responses to someone who wants comfort.
Next time, read what someone actually says, and respond to that.
They forgot to put in the quit when they built this one. You should be in the porn industry.
Indeed, I don’t think I can convince you at this point, so enjoy the touch of grass