Just this guy, you know?
Training new models is already the domain of large actors only, simply due to the GPU requirements, which serve as a massive moat. That ship has sailed. There isn’t a single open source model, today, that wasn’t trained by a corporate entity first and only fine-tuned by the community later.
You don’t need AI for any of that. Determined state actors have been fabricating information and propagandizing the public, Mechanical Turk-style, for a long, long time now. When you can recruit thousands of people as cheap labour to make shit up online, you don’t need an LLM.
So no, I don’t believe AI represents a new or unique risk at the hands of state actors, and therefore no, I’m not so worried about these technologies landing in the hands of adversaries that I think we should abandon our values or beliefs Just In Case. We’ve had enough of that already, thank you very much.
And that’s ignoring the fact that an adversarial state actor having access to advanced LLMs isn’t somehow negated or offset by us having them, too. There’s no MAD for generative AI.
Really? I’m supposed to believe AI is somehow more existentially risky than, say, chemical or biological weapons, or human cloning and genetic engineering (all of which are banned or heavily regulated in developed nations)? Please.
I understand the AI hype artists have done a masterful job convincing everyone that their tech is so insanely powerful (and thus incredibly valuable to prospective investors) that it’ll wipe out humanity, but let’s try to be realistic.
But you know, let’s take your premise as a given. Even then, I refuse to let an unknowable hypothetical be used to hold our better natures hostage. There are countless examples of governments and corporations using vague threats to get us to accept bad deals at the barrel of a virtual gun. Sorry, I will not play along.
You know what?
I’m fine with that hypothetical risk.
“The bad guys will do it anyway so we need to do it, too” is the worst kind of fatalism. That kind of logic can be used to justify any number of heinous acts, and I refuse to live in a world where the worst of us are allowed to drag down the rest of us.
For the record, I deleted the comment you replied to because I realized I was wrong in that both Tesla and the quoted manual, above, urge the removal of tree sap and so forth immediately, something I hadn’t caught in my first reading.
Having recognized that, I realized I hadn’t considered the more fundamental point I called out in my other comment (that the Cybertruck’s finish requiring the same treatment as a regular car’s is in fact an indictment of the quality of its exterior, not a justification for it), hence the new reply.
Yes, but you see the difference is my car is expected to rust because it’s not made of supposedly stainless steel.
So I fully expect to have to protect my car’s finish. That’s why it’s painted. The Cybertruck doesn’t even have a clear coat. One would thus naturally expect that, unlike my regular non-stainless-steel car, the Cybertruck wouldn’t rust.
Please try to keep your criticisms of Musk fair and unbiased. Otherwise, you risk weakening your point.
Thank you for your unsolicited advice. I’m sure next time I’ll keep it in mind while having meaningless arguments with anonymous internet strangers.
The damn maintenance manual tells owners to carefully remove anything remotely corrosive (including, among other things, tree sap). Given Tesla knows the material is subject to rust, I think it’s a bit more than just some confused owners.
Not just stiffer: the sharp angles on the body are also much more likely to cause serious injury to pedestrians and cyclists (there’s a reason modern vehicles have rounded edges). Unfortunately, the lack of regulations in North America on safety features vis-à-vis anyone but the vehicle occupants means these death machines remain street legal.
Solution is simple: tax and regulate. These large vehicles come with externalities including contributing to global warming, increased road wear, increased use of road and parking space, and higher rates of pedestrian injury and fatality.
So, tax them so the owners pay for those externalities, and/or regulate to prevent them in the first place. This is an entirely solvable problem if governments, and the people they represent, really care.
Bruh, do you really think the author doesn’t know who one of the largest IT agencies in the world is? Could it be, rather, that they were dumbing it down for the audience, since it’s, you know, not an article about Accenture, and ended up with some slightly odd phrasing as a consequence?
As a former product manager where the CEO led the sales team, I feel seen.
Until one of these AIs starts selling other people’s work as its own (and no, I don’t mean derivative works, I mean the copyrighted material itself), nobody is breaking the rules here.
Except of course that’s not how copyright law works in general.
Of course, the questions are: 1) is training a model fair use, and 2) are the resulting outputs derivative works? That’s for the courts to decide.
But in general, just because I publish content on my website does not give anyone else license or permission to republish that content or create derivative works, whether for free or for profit, unless I explicitly license that content accordingly.
That’s why things like Creative Commons exist.
But surely you already knew that.
Oh, well, you’ve clearly done the kind of deep and thoughtful analysis that would allow you to determine the general opinions of all Lemmy users. My mistake. Carry on.
Hah I… think we’re on the same side?
The original comment was justifying unregulated and unmitigated research into AI on the premise that it’s so dangerous that we can’t allow adversaries to have the tech unless we have it too.
My claim is AI is not so existentially risky that holding back its development in our part of the world will somehow put us at risk if an adversarial nation charges ahead.
So no, it’s not harmless, but it’s also not “shit, this is basically like nukes” harmful either. It’s just the usual shitty Silicon Valley kind of harmful: it will eliminate jobs, increase wealth inequality, destroy the livelihoods of artists, and make the internet a generally worse place to be. And it’s more important for us to mitigate those harms now than to worry about some future nation-state threat that I don’t believe actually exists.
(It’ll also have lots of positive impacts, but that’s not what we’re talking about here.)