Large language model AIs may seem smart on a surface level, but they struggle to genuinely understand the real world and model it accurately, a new study finds.

  • metaStatic@kbin.earth · 7 hours ago

    Wow, a video just came out that explains my position on this topic almost perfectly.

    https://youtu.be/AqwSZEQkknU?t=273

    tl;dw: I tried to timestamp the exact point … OK: you generally can’t deduce the rules of an underlying reality from an emergent level. She calls it the decoupling of scales, and it’s essentially the same problem I have with simulation theory. These programs might form a model of reality, but that reality would at best be human-produced descriptions of reality, and most likely just a model of how best to guess the next word (see the sketch below).

    tl;dr: put glue on your pizza to stop the cheese sliding off
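
    To make the “guess the next word” point concrete, here is a toy bigram model in Python. It is my own sketch, not anything from the study or the video, and real LLMs use neural networks over subword tokens rather than word counts, but the training objective has the same shape: predict what comes next from what came before.

    ```python
    from collections import Counter, defaultdict

    # Toy corpus; a real model trains on trillions of tokens.
    corpus = "the cheese slides off the pizza so the cheese needs glue".split()

    # Count how often each word follows each other word.
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_word_probs(prev):
        """Turn follow-counts into a probability distribution over next words."""
        total = sum(counts[prev].values())
        return {word: n / total for word, n in counts[prev].items()}

    # The "model" knows word statistics, not cheese, pizza, or physics.
    print(next_word_probs("the"))  # {'cheese': 0.667, 'pizza': 0.333} (approx.)
    ```

    Everything the model “knows” lives in those counts; swap the corpus and its “reality” changes with it.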

    • knokelmaat@beehaw.org · 6 hours ago

      That’s a very interesting point of view, and indeed well formulated in the video!

      I don’t necessarily agree with it, though. As a human being, I have grown up and learned from my own experience and from the experiences of previous humans that were documented or directly communicated to me. I see no inherent difference between that and an artificial intelligence learning from the same data.

      I never did all the experiments, or the research previous scientists did, but I trust their reproducibility and logical conclusions. In the same way, I think an artificial intelligence could theoretically learn these things from previously documented findings. This would be an ideal “general intelligence” AI.

      The main problem, I think, is that AI would need to be even more computationally intensive and complex to reach these advanced levels of understanding. And at that point, I see it as a fun theoretical exercise without actual practical benefit: the cost (in money, time, and energy) seems far too large for eventually creating something that we as humans can already do ourselves.

      The current state of LLMs is one of a very basic “semblance” of understanding, close to what you describe as probability-based conversation.

      I feel that AI is best at doing very specific tasks, where the problem space is small enough for it to actually learn the underlying model. In the same way, I think LLMs are best at language itself: rewriting text or generating new text. What companies seem to think, though, is that because a model is good at producing realistic language, it is also competent at the content of what it is writing. And again, for that to be true, it would need a much more advanced method of computation than is currently available.
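
      As a toy illustration of what a small problem space buys you (a hypothetical example of mine, not anything from the article or the video): 3-bit parity has only 8 possible inputs, so a model can observe every case and capture the underlying rule exactly, which is something no enumeration can do for natural language.

      ```python
      from itertools import product

      # Hypothetical toy example: 3-bit parity. The whole problem space is
      # just 2**3 = 8 inputs, so a "model" can observe every case once and
      # capture the underlying rule exactly.
      def parity(bits):
          return sum(bits) % 2

      # "Training" = enumerating the entire problem space.
      lookup = {bits: parity(bits) for bits in product((0, 1), repeat=3)}

      # The learned model is now exact: there is no unseen input left.
      assert all(lookup[b] == parity(b) for b in product((0, 1), repeat=3))
      print(lookup[(1, 0, 1)])  # 0
      ```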

      Take this all with a grain of salt though, as I am no expert on the matter. I am an electrical engineer who no longer works in the sector due to mental health issues, but who keeps an interest in computer science.