• bankimu@lemm.ee · 1 year ago

    You are ruling out intelligence without (very probably) being able to define it, just because you have a vague idea of how it works.

    The problems with this mode of thinking are: a) you put human brains on a pedestal, even though they follow physical processes to “predict the next word” and may very well be neural networks themselves; b) you ignore data showing intelligence in multiple areas of the more complex models because “oh, it’s mindless because I know it’s predicting tokens”; and c) you favor data showing edge cases, which probably come from lower-quality models.

    You’re not alone in this line of thinking.

    Your mind is set. You won’t recognize intelligence when you see it.

    • randomname01@feddit.nl · 1 year ago

      No, I’m not singling out human brains. Other animals have proven to be quite adept at problem-solving as well.

      LLMs, however, just haven’t. Problem solving simply isn’t part of how they currently function. In some cases they can mimic actual logic very well, but that’s about it.