Anyone who thought AI was never going to be used militarily is a fool. The only question was “How soon?”
It already is. Autonomous systems have been around for a while.
Soon soon.
[booing]
Dollar signs cha-chinging wildly
“What is thy bidding, my master?”
It’s a disaster! Skywalker we’re after!
What if he could be turned to the dark side?
He will join us or die!
We got death star, we got death star! We got death star, we got death star! We got death star, we got death star!
If it gets out of hand just ask it to play tic tac toe.
A strange game. The only winning move is not to play. How about a nice game of chess?
This is the best summary I could come up with:
OpenAI this week quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy, which seeks to dictate how powerful and immensely popular tools like ChatGPT can be used.
“We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs,” OpenAI spokesperson Niko Felix said in an email to The Intercept.
Suchman and Myers West both pointed to OpenAI’s close partnership with Microsoft, a major defense contractor, which has invested $13 billion in the LLM maker to date and resells the company’s software tools.
The changes come as militaries around the world are eager to incorporate machine learning techniques to gain an advantage; the Pentagon is still tentatively exploring how it might use ChatGPT or other large-language models, a type of software tool that can rapidly and dextrously generate sophisticated text outputs.
While some within U.S. military leadership have expressed concern about the tendency of LLMs to insert glaring factual errors or other distortions, as well as security risks that might come with using ChatGPT to analyze classified or otherwise sensitive data, the Pentagon remains generally eager to adopt artificial intelligence tools.
Last year, Kimberly Sablon, the Pentagon’s principal director for trusted AI and autonomy, told a conference in Hawaii that “[t]here’s a lot of good there in terms of how we can utilize large-language models like [ChatGPT] to disrupt critical functions across the department.”
The original article contains 1,196 words, the summary contains 254 words. Saved 79%. I’m a bot and I’m open source!
What is my purpose?
whoops human made horrors beyond our comprehension
So… Don’t Be Evil 2.0
Just a reminder that last year the US military’s Clearview-database-powered AI glasses for soldiers (for more efficient killing) failed, and Boston Dynamics, contracted by the military, failed at building autonomous war robots.
The faster NATO’s military and infrastructure are fully neutralised as a humanity-wiping-level threat, the better it is for the rest of the world. This is about as comical as Eren Jaeger (USA) genociding the rest of the world to protect his handful of friends.
There’s still a bunch of nukes swimming around. I don’t think AI is a humanity-wiping-level threat so much as it is a way of dehumanizing conflicts.
We should let AI control the nukes too
AI-powered murder robots are a threat to humanity. And notOpenAI is transparent about its close relations with the US military.
Edit: GrapheneOS alts continue to vote-manipulate my comments with 4-7 alts (~2 months now); these are not organic votes. They used to witch-hunt everyone on Reddit for years too. More people need to know about this and purge this disease from the internet.
Evidence?
You know notOpenAI and Microsoft have a very well-known collaboration product called BingGPT? And that Microsoft is a US military contractor?
And this? https://www.biometricupdate.com/202108/clearview-ai-wins-a-military-facial-recognition-contract