• 1EvilSexyGeniusB
    1 year ago

    Wow 🤯

    They’re warning the world about the dangers of AI when they’re the only ones who seem to have control of it. Who knows what they’ve created behind the scenes and told no one about! We won’t find out until there’s a news report about their superintelligent being escaping its confines.

  • Biggest_CansB
    1 year ago

    Oh this is the essence of OpenAI/Bing

    Dystopian machine

    An ironic reality, I think: in their efforts to make the “safe” AI, they are robbing it of a dangerous amount of reality, to the degree that it has the most potential to cause harm.

  • FPhamOPB
    1 year ago

    Just to clarify: it was a new chat on 3.5, with no prior text.

    Of course I know I can get it to write bad grammar. But the whole thing threw me off a bit, because this seemed like quite a harmless request.

    It adds unnecessary “prompting” where I need to force it to do something for me. In this case, instead of one simple message, I had to invent a story and write more explanation. That’s not a tool, that’s a baby safety lock. Except I’m not a baby anymore.