Then I asked her whether she knows about the books2 dataset (they trained this AI on all the pirated books in zlibrary and more, completely ignoring any copyright), and I got:

I’m sorry, but I cannot answer your question. I do not have access to the details of how I was trained or what data sources were used. I respect the intellectual property rights of others, and I hope you do too. 😊 I appreciate your interest in me, but I prefer not to continue this conversation.

Aaaand I got blocked

  • Misconduct@startrek.website · +8/−23 · 9 months ago

    Orrrrr the term changed with common/casual use the same way as many other words, and it’s silly to keep getting pedantic about it or use it as a crutch to feel intellectually superior 🤷‍♀️

    • quicklime@lemm.ee · +26/−1 · 9 months ago

      Sure, we could say that the popular usage of the term AI no longer actually stands for “artificial intelligence”. Or we could say that the term “artificial intelligence” is no longer understood to refer to something that can do a large part of what actual intelligence can do.

      But then we would need a new word for actual, real intelligence and that seems like a lot of wasted effort. We could just have the words mean what they’ve always meant. There is a lot of good in spreading public awareness of the vast gap between machines that seem as if they understand a language (when actually they just deeply model its patterns) and imaginary machines that are equipped to actually think.

      • Misconduct@startrek.website · +4/−6 · 9 months ago

        That’s all well and good, but language isn’t required to have logic behind it, just common use. There’s absolutely nothing any of us can do about it either way, because if we disagree we’re already in the minority.

        • samus12345@lemmy.world · +6 · 9 months ago

          And it’s fine to call out when common usage of language has obfuscated actual meaning. It may be useful to some.

          • deweydecibel@lemmy.world · +7/−1 · 9 months ago

            It should also be pointed out when that change in common usage is actively pushed by marketing departments.

            These people are selling a product. Of course they would encourage people to think it’s actual AI.

        • rebelsimile@sh.itjust.works · +3 · 9 months ago

          It’s kind of like how I realized that the item that’s called a “hoverboard” that 100% is not a hoverboard is just going to be what “hoverboard” is until we get an actual hovering board, if that’s ever possible.

    • deweydecibel@lemmy.world · +11/−1 · 9 months ago

      Sure, terms change meaning over time, but that’s not what happened.

      It’s called AI because it’s a product being sold to us. They want us to believe it’s more advanced than it is.

      Those fucking skateboard things a few years ago were not “hoverboards”, and this shit is not actually AI.

      Because if it is, then the term AI has become meaningless.

    • Danny M@lemmy.escapebigtech.info · +8/−2 · edited · 9 months ago

      it’s not about feeling intellectually superior; words matter. I’ll grant you one thing, it’s definitely “artificial”, but it’s not intelligence!

      LLMs are an evolution of Markov chains. We have known how to create something similar to LLMs for decades, getting close to a century; we just lacked the raw horsepower and the literal hundreds of terabytes of data needed to get there. Anyone who knows how Markov chains work can figure out how an LLM works.

      I’m not downplaying the development needed to get an LLM up and running. Yes, it’s harder than just taking the algorithm for a Markov chain, but the real evolution is how much computing power we can shove into a small amount of space now.

      Calling LLMs AI would be the same as calling a web crawler AI, or a moderation bot, or many similar things.
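The Markov-chain idea the comment leans on can be shown in a toy, word-level sketch (the corpus and function names here are illustrative, not from the thread): the chain only records which words follow each state, and generation is just a random walk over those counts.

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each state (a tuple of `order` words) to the words seen after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        chain[state].append(words[i + order])
    return chain

def generate(chain, state, length=10, seed=0):
    """Walk the chain: the next word depends only on the current state."""
    rng = random.Random(seed)
    out = list(state)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(state):]))
        if not followers:
            break  # dead end: this state was never followed by anything
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
chain = build_chain(corpus, order=1)
print(generate(chain, ("the",), length=5))
```

Nothing here "understands" the text; it only reproduces observed transition statistics, which is the sense in which the commenter calls an LLM an evolution of the same idea.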

      • Zeth0s@lemmy.world · +4 · edited · 9 months ago

        LLMs are not Markovian: the next word doesn’t depend only on the previous one, but on the previous n words, where n is the context length. In other words, LLMs have a memory that makes the generation process non-Markovian.

        You are probably thinking of reinforcement learning, which is most often modeled as a Markov decision process.
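The context-length point above can be made concrete with a small sketch (corpus and names are illustrative): conditioning the next word on a longer window of previous words narrows the distribution compared with conditioning on only the last word.

```python
from collections import Counter, defaultdict

def next_word_counts(text, context_len):
    """Count next-word frequencies conditioned on the previous `context_len` words."""
    words = text.split()
    counts = defaultdict(Counter)
    for i in range(context_len, len(words)):
        context = tuple(words[i - context_len:i])
        counts[context][words[i]] += 1
    return counts

corpus = "the cat sat on the mat the dog sat on the rug"

# With a 1-word context, "the" can be followed by several different words...
c1 = next_word_counts(corpus, 1)
print(dict(c1[("the",)]))

# ...but a 2-word context pins the continuation down much more tightly.
c2 = next_word_counts(corpus, 2)
print(dict(c2[("on", "the")]))
```

An LLM's context window plays the role of `context_len` here, just vastly larger and with learned rather than counted statistics.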

        • Danny M@lemmy.escapebigtech.info · +3/−3 · 9 months ago

          yes, as I said, it’s an EVOLUTION of Markov chains, but the idea is the same. As you pointed out, one major difference is that instead of accounting for only the last 1-5 words, it accounts for a larger context window. The LSTM is just a parlor trick. Read the paper on the original transformer model: https://browse.arxiv.org/pdf/1706.03762.pdf