• thantik@lemmy.world

    All “AI” is today is a facsimile of human intelligence by a machine that has learned to reproduce things similar to what it sees. Nothing about today’s “AI” is actually intelligent.

    • AeroLemming@lemm.ee

      You’re right if you’re just talking about generative AI. AI agents can outsmart humans and show real intelligence in very narrow tasks. That doesn’t make them smarter than humans (there is no AGI), but they can definitely surpass human capabilities because they are so specialized to a specific problem. For example, look at AlphaGo/AlphaStar.

  • Lugh@futurology.today (OP, mod)

    Some people seem wildly optimistic about AGI being around the corner. Yet there’s no indication the current approach to AI will deliver AI with independent reasoning abilities. In fact, despite decades of attempts, no one has outlined how AI might acquire reasoning ability. Without that, no AGI.

    • V@beehaw.org

      I predict the problem is parallel to what initially limited AI efforts from the 1980s to the 2010s: a lack of information and of the ability to process that information. Knowing you can pull a string but not push it is a common example of reasoning that isn’t available in the context of text or static-image parsing. Multimodal helps, but we need to figure out how to train without needing to retrain the entire network, especially for larger datasets like video.
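
      One existing approach along these lines is to freeze most of a pretrained network and only train a small new piece on the new data, so nothing close to a full retrain is needed. A minimal sketch, assuming PyTorch and torchvision (the ResNet backbone and the 10-class head are illustrative choices, not anything from this thread):

      ```python
      # Sketch: avoid retraining the whole network by freezing a pretrained
      # backbone and training only a small replacement head.
      import torch
      import torch.nn as nn
      from torchvision import models

      # Load a pretrained image backbone and freeze all of its parameters.
      backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
      for param in backbone.parameters():
          param.requires_grad = False

      # Swap in a small trainable head for a hypothetical 10-class task.
      backbone.fc = nn.Linear(backbone.fc.in_features, 10)

      # Only the new head's parameters are handed to the optimizer.
      optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
      criterion = nn.CrossEntropyLoss()

      # One illustrative training step on a dummy batch.
      images = torch.randn(8, 3, 224, 224)
      labels = torch.randint(0, 10, (8,))
      loss = criterion(backbone(images), labels)
      loss.backward()
      optimizer.step()
      ```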

      • Lugh@futurology.today (OP, mod)

        Perhaps. The problem with this line of thought is that it assumes reasoning will arise spontaneously, without explaining how. It doesn’t inspire much confidence as the basis for a hypothesis.

        • V@beehaw.org

          Reasoning isn’t innate to organic networks either. It’s a byproduct of pattern matching generalizing to wider stimuli and recognizing the differences. Convolutional networks don’t memorize every breed of cat; they recognize the patterns (features) that define them. Reasoning is an extension of this. “I can’t push a string” and “I can’t unscramble an egg” are also patterns: the pattern of non-reciprocal or irreversible relationships. Extending these to new situations is applied reasoning. It’s the same idea as transformer models creating new poems in styles that weren’t common before: generalizing patterns to new situations. The question is how we train to accommodate generalization without detracting from accuracy, and how we replicate neuroplasticity in a digital network.
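
          As a concrete illustration of features generalizing rather than memorizing, here is a minimal sketch, assuming PyTorch and torchvision (the pretrained ResNet, the random placeholder images, and the nearest-reference comparison are all illustrative assumptions): embed images with a network that never saw these categories, and let similarity in feature space do the classifying.

          ```python
          # Sketch: a pretrained CNN as a feature extractor; new categories are
          # recognized by feature similarity rather than memorized labels.
          import torch
          import torch.nn as nn
          from torchvision import models

          # Pretrained backbone with its classification head removed.
          backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
          backbone.fc = nn.Identity()  # outputs 512-dim feature vectors
          backbone.eval()

          def embed(images: torch.Tensor) -> torch.Tensor:
              """Map a batch of images to L2-normalized feature vectors."""
              with torch.no_grad():
                  return nn.functional.normalize(backbone(images), dim=1)

          # Placeholder examples of two categories the network was never trained on.
          reference_a = embed(torch.randn(4, 3, 224, 224))  # e.g. breed A
          reference_b = embed(torch.randn(4, 3, 224, 224))  # e.g. breed B
          query = embed(torch.randn(1, 3, 224, 224))

          # Cosine similarity to each reference set decides the query's category.
          sim_a = (query @ reference_a.T).mean().item()
          sim_b = (query @ reference_b.T).mean().item()
          print("closer to A" if sim_a > sim_b else "closer to B")
          ```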