• Lugh@futurology.today (OP) · 1 year ago

    Perhaps. The problem with this line of thought is that it assumes reasoning will arise spontaneously without explaining how. That doesn’t inspire much confidence as the basis for a hypothesis.

    • V@beehaw.org · 1 year ago · edited

      Reasoning isn’t innate to organic networks either. It’s a byproduct of pattern matching generalizing to wider stimuli and recognizing the differences. Convolutional networks don’t memorize every breed of cat; they learn the patterns (features) that define cats in general. Reasoning is an extension of this: “I can’t push a string” and “I can’t unscramble an egg” are also patterns, instances of non-reciprocal or irreversible relationships. Extending those patterns to new situations is applied reasoning, the same way transformer models write new poems in styles that weren’t common before by generalizing learned patterns to new contexts. The open questions are how to train for generalization without sacrificing accuracy, and how to replicate neuroplasticity in a digital network.
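      To make the features-vs-memorization point concrete, here’s a minimal sketch (my own illustration, not anything from the thread): a pretrained ResNet-18 with its classifier head removed acts as a generic feature extractor, and feature similarity, not a memorized label, is what groups an unseen breed with other cats. The image tensors below are random placeholders, so the printed similarities are only meaningful with real, normalized photos.

      ```python
      import torch
      import torchvision.models as models

      # Pretrained ResNet-18 with the final classifier replaced by Identity,
      # so the output is a 512-d feature vector instead of a class label.
      backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
      backbone.fc = torch.nn.Identity()
      backbone.eval()

      def embed(img: torch.Tensor) -> torch.Tensor:
          """Map a (3, 224, 224) image tensor to a 512-d feature vector."""
          with torch.no_grad():
              return backbone(img.unsqueeze(0)).squeeze(0)

      # Random placeholders standing in for two cat breeds and a dog;
      # real use would load photos and apply the model's normalization.
      persian, sphynx, beagle = (torch.rand(3, 224, 224) for _ in range(3))

      cos = torch.nn.functional.cosine_similarity
      print(cos(embed(persian), embed(sphynx), dim=0))  # high for real cat photos
      print(cos(embed(persian), embed(beagle), dim=0))  # lower for a real dog photo
      ```

      The point of the sketch: nothing here stores a “Persian” or “Sphynx” template. A never-seen breed still lands near other cats in feature space because the network learned the defining patterns, which is the generalization the comment describes.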