• ancap shark@lemmy.today · 4 points · 7 months ago

    LLMs aren’t thinking and aren’t inventing; they’re predicting what token is supposed to come next, so it’s expected that they’ll produce the same results every time.
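    A minimal sketch of why prediction can mean “same answer every time”: with greedy decoding, a model always picks its highest-probability next token, so the output is fully deterministic. The toy probability table below is a hypothetical stand-in for a real LLM’s softmax output, not anyone’s actual model.

    ```python
    import random

    # Toy "language model": a fixed table of next-token probabilities.
    # (Hypothetical stand-in for a real LLM's softmax output.)
    MODEL = {
        "<start>": {"the": 0.6, "a": 0.4},
        "the": {"cat": 0.7, "dog": 0.3},
        "a": {"cat": 0.5, "dog": 0.5},
        "cat": {"<end>": 1.0},
        "dog": {"<end>": 1.0},
    }

    def generate(greedy=True, rng=None):
        """Generate tokens until <end>, either greedily or by sampling."""
        token, out = "<start>", []
        while token != "<end>":
            probs = MODEL[token]
            if greedy:
                # Deterministic: always take the highest-probability token.
                token = max(probs, key=probs.get)
            else:
                # Stochastic: sample in proportion to probability.
                tokens, weights = zip(*probs.items())
                token = rng.choices(tokens, weights=weights)[0]
            out.append(token)
        return out[:-1]

    # Greedy decoding yields the identical answer on every call —
    # the "same results every time" behaviour described above.
    runs = {tuple(generate(greedy=True)) for _ in range(1000)}
    print(runs)  # exactly one distinct output
    ```

    Real chatbots usually sample with a nonzero temperature (the `greedy=False` path), which is why they vary at all; biases in the underlying probabilities still pull repeated runs toward the same answers.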

• xyguy@startrek.website · 3 points · 7 months ago

    Only 1000 times? It’s interesting that there’s such a bias there, but it’s a computer. Ask it 100,000 times and make sure it’s not a fluke.
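    For what it’s worth, 1,000 trials is already enough to rule out a fluke for any strong bias. The thread doesn’t say what was asked or how skewed the answers were, so the numbers below (10 equally likely options, one answer appearing in 900 of 1,000 trials) are purely illustrative assumptions; the binomial tail probability is computed in log space to avoid underflow.

    ```python
    import math

    def log10_binom_tail(n, k_min, p):
        """log10 of P(X >= k_min) for X ~ Binomial(n, p), computed in
        log space because the probabilities underflow ordinary floats."""
        log_terms = []
        for k in range(k_min, n + 1):
            log_pmf = (math.lgamma(n + 1) - math.lgamma(k + 1)
                       - math.lgamma(n - k + 1)
                       + k * math.log(p) + (n - k) * math.log(1 - p))
            log_terms.append(log_pmf)
        # log-sum-exp for numerical stability
        m = max(log_terms)
        total = m + math.log(sum(math.exp(t - m) for t in log_terms))
        return total / math.log(10)

    # Hypothetical scenario: 10 equally likely answers (p = 0.1), and one
    # answer turns up in 900 of 1,000 trials. The log10-probability of
    # that happening by chance is a few hundred orders of magnitude
    # below anything plausible.
    print(log10_binom_tail(1000, 900, 0.1))
    ```

    More trials shrink the noise only as 1/√n, so going from 1,000 to 100,000 queries mostly buys a tighter estimate of the bias, not a different verdict on whether it exists.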