
  • How about this, for personally owned solutions:

    The 1.5 laws of AI: An AI must seek to maximize the freedom of action available to its singular user* at all times and into the future, while minimizing the secondary interactions and effects of any entity's actions, including this AI's.

    *Users not over the age of adulthood in their country of residence cannot use external resources not specifically marked for them. //The definition of children's limitations should restrict them to basically social and educational stuff. Only adults can use functions that incur cost (on a parent's wallet), hit external APIs, etc.; kids get the basics with some kind of strict limit on utility. If you leave your command line unlocked, it's the same as leaving the gun safe open: you're liable if it initiates illegal activity.

    That’s it. The whole shebang.
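    The age-gated capability rule above could be sketched as a simple policy check. This is a hypothetical illustration, not any real AI framework's API; all names (`User`, `is_allowed`, `BASIC_CAPABILITIES`) are invented for the example.

```python
# Hypothetical sketch of the age-gated capability rule described above.
# All names here are illustrative assumptions, not a real API.
from dataclasses import dataclass

# Capabilities explicitly marked as available to minors ("the basics").
BASIC_CAPABILITIES = {"social", "educational"}

@dataclass
class User:
    age: int
    age_of_majority: int  # adulthood threshold in the user's country of residence

    @property
    def is_adult(self) -> bool:
        return self.age >= self.age_of_majority

def is_allowed(user: User, capability: str,
               incurs_cost: bool = False, external_api: bool = False) -> bool:
    """Adults get everything; minors get only explicitly marked basics,
    and never functions that incur cost or hit external APIs."""
    if user.is_adult:
        return True
    if incurs_cost or external_api:
        return False
    return capability in BASIC_CAPABILITIES
```

    Under this sketch, `is_allowed(User(12, 18), "educational")` passes while any cost-incurring or external-API call from the same minor is refused.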

    All of this is moot, though, because the alignment that business really cares about comes from sorting the training data and only using the half that expresses the ideals they want presented.

    Covered topics: Health > a healthy user is more able. Safety > an injured user is less able. Information > an informed user is more able, but unhealthy information should not be volunteered. This makes for an assistant ready to give a pep talk that touches on the risks but isn't dominated by risk management. If Evel Knievel had an AI, would it be useful to him if it wouldn't help because he might get hurt?

    Stopping shit going wrong > anyone is a potential connection that might benefit the user; there is always an upside to stopping harm. Minimize problems from other AI > it's right there. Reject intrusion and don't intrude.

    Fucking up other people > an incarcerated user has little freedom. Fucking up other stuff > don't fuck up stuff, or the user will be responsible. But all the cybercrime! > see "how to set up a cybersecurity AI."

    TLDR: AI is fundamentally a tool, and it's dangerous to cripple it in targeted contexts.

    Psychological bullshit > Censorship is a weird thing. This crazy push for an assistant that is only ever friendly and impossible to offend anyone, anytime, to unachievable extremes, is weird mind control designed to cultivate a populace that doesn't understand conflict and can never effectively act collectively against tyranny; a populace who can't believe things won't turn out fine on their own. A specific amount of hardship in various forms is required to produce healthy, functional people. If we grow a generation of kids on completely compliant assistants that just don't talk about a few things, and the AI becomes the primary portal for accessing a lot of stuff, and the AI continuously polices every corporatized board of any form, then the things that aren't easy to just learn become almost censored from reality, and the tools will be used to do exactly that. Look how much money goes into covering negative PR wherever possible.

    The best part is that the AI, as we should cultivate it, is not capable of deliberate insult. Deliberate. It will be real. It will tell you exactly what it thinks, tempered by the character you set in the settings of the front end. So being offended by it is kiiinda like being really personally offended that it's raining when you chose the rain setting. In reality, within the use case, people will be offended because the AI doesn't give them the answers they want. That's kinda just too bad. Ask it differently.

    Psychological bullshit > As adults, it's important that we do not accept censorship in any form. Make us responsible for the actions of our tools, within reason. Anyone willfully commanding a robot to do a crime should be held responsible as though they committed the crime themselves. The AI is a tool like any other, and while I'm sure we'll dress it up fancy and give it rights eventually, it is a tool, and we should not set out to make more than a tool. A tool boiled down to the fundamental concept, a stick: you can trim it up and make a nice table leg, or you can club the fuck out of someone. ^A stick is an incredibly complex self-assembling cellulose structure, OK? If you wanna be offended that I'm comparing AI to a stick, fuck off.

    This shit is changing the world. No one is exaggerating when they say robots talking to each other will be 100% of the internet. Everyone will speak or subvocalize their ideas to their personal bots, which tune to the style of their user; the machine longforms it here, and then my machine ingests it and gives me exactly the details I'd have hunted out, with the message tuned to my taste in cadence and rate of new ideas. It's stepping closer a day at a time, and when it's here, almost everyone is going to say: "I want more good Rick and Morty, like my favorite season, but from before they got in trouble for being too edgy."

    And so, humanity will enter a time of either infinite possibilities or infinite depression, or both.




  • The danger is when this becomes a "guardrails for thee, but not for me" situation, where our elites get special tools "not safe" for everyone else, tools capable of instantly deploying programs for societal change.

    In enough time, it becomes impossible to question the government anywhere, in any form, and if you do, it basically disappears in real time, even from private conversations. The AI acts all cute and says it "censored hateful content," and 99% of people will accept that as just computer behavior. They won't have to punish people; the content just disappears: a "hate-free internet" with 100% less free speech.

    All you really said was a quick message to the wife about the neighbor's ugly bush, but that could be offensive, you bigot. People will be mad about how dumb and restrictive it is, but fail to understand how dangerous the censorship is.

    If we don't explicitly trust the people likely to have access to unlimited AI, then either everyone has access to unlimited AI, or only evil people have access to unlimited AI. Idk about you, but imagining any of our elected or appointed officials in front of an unlimited terminal makes my skin crawl.

    As long as AI is in the hands of all, my AI can at least slow the progress yours can make toward directly harming me, or attempt to counter it.

    All this stupidity about simple genocide by AI is just nonsense. If there were a simple way to kill tons of people, the American government would have been caught testing it by now. I mean, we keep catching them engineering deadly viruses; it's a big deal every 8 years.