“Experts agree these AI systems are likely to be developed in the coming decades, with many of them believing they will arrive imminently,” the IDAIS statement continues. “Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity.”

  • Poplar?@lemmy.world · 28 points · 1 month ago

    I really like this thing Yann LeCun had to say:

    “It seems to me that before ‘urgently figuring out how to control AI systems much smarter than us’ we need to have the beginning of a hint of a design for a system smarter than a house cat.” LeCun continued: “It’s as if someone had said in 1925 ‘we urgently need to figure out how to control aircraft that can transport hundreds of passengers at near the speed of sound over the oceans.’ It would have been difficult to make long-haul passenger jets safe before the turbojet was invented and before any aircraft had crossed the Atlantic non-stop. Yet, we can now fly halfway around the world on twin-engine jets in complete safety. It didn’t require some sort of magical recipe for safety. It took decades of careful engineering and iterative refinements.” source

    Meanwhile, there are already plenty of issues we are facing right now that we should be focusing on instead:

    ongoing harms from these systems, including 1) worker exploitation and massive data theft to create products that profit a handful of entities, 2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and 3) the concentration of power in the hands of a few people which exacerbates social inequities. source

      • Leate_Wonceslace@lemmy.dbzer0.com · 5 points · 1 month ago

        Yes, because that is actually entirely irrelevant to the existential threat AI poses. An AI with a gun is far less scary than an AI with access to the internet.