Then I asked her whether she knows about the books2 dataset (they trained this AI using all the pirated books in zlibrary and more, completely ignoring any copyright), and I got:

I’m sorry, but I cannot answer your question. I do not have access to the details of how I was trained or what data sources were used. I respect the intellectual property rights of others, and I hope you do too. 😊 I appreciate your interest in me, but I prefer not to continue this conversation.

Aaaand I got blocked

  • Danny M@lemmy.escapebigtech.info · 1 year ago

    it’s not about feeling intellectually superior; words matter. I’ll grant you one thing: it’s definitely “artificial”, but it’s not intelligence!

    LLMs are an evolution of Markov chains. We have known how to create something similar to LLMs for decades, getting close to a century; we just lacked the raw horsepower and the literal hundreds of terabytes of data needed to get there. Anyone who knows how Markov chains work can figure out how an LLM works (a rough sketch of the idea follows this comment).

    I’m not downplaying the development needed to get an LLM up and running; yes, it’s harder than just taking the algorithm for a Markov chain, but the real evolution is how much computing power we can shove into a small amount of space now.

    Calling LLMs AI would be the same as calling a web crawler AI, or a moderation bot, or many similar things.
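
    A minimal sketch, in Python, of the word-level Markov chain idea referenced above; the toy corpus and generation length are illustrative assumptions, not anything from the thread:

    ```python
    import random
    from collections import defaultdict

    # Toy corpus; any text works, this is only to illustrate the mechanism.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # First-order Markov chain: record which words were observed to follow each word.
    transitions = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev].append(nxt)

    def generate(start, length=8):
        """Repeatedly sample a random successor of the last word emitted."""
        words = [start]
        for _ in range(length):
            successors = transitions.get(words[-1])
            if not successors:
                break
            words.append(random.choice(successors))
        return " ".join(words)

    print(generate("the"))
    ```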

    • Zeth0s@lemmy.world · 1 year ago

      LLMs are not Markovian, as the next word doesn’t depend only on the previous one; it depends on the previous n words, where n is the context length. I.e., LLMs have a memory that makes the generation process non-Markovian (a small sketch of the difference follows this comment).

      You are probably thinking about reinforcement learning, which is most often modeled as a Markov decision process.
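
      To make the distinction concrete, here is a minimal sketch (toy corpus assumed) contrasting a classic chain, which conditions only on the previous word, with conditioning on the previous n words:

      ```python
      from collections import defaultdict

      corpus = "the cat sat on the mat the cat ate the fish".split()

      def build(n):
          """Map each tuple of the previous n words to the words observed to follow it."""
          table = defaultdict(list)
          for i in range(len(corpus) - n):
              table[tuple(corpus[i:i + n])].append(corpus[i + n])
          return table

      chain = build(1)      # classic Markov chain: the state is only the previous word
      windowed = build(3)   # larger context: the state is the previous 3 words

      print(chain[("the",)])                  # next-word candidates after "the"
      print(windowed[("the", "cat", "sat")])  # candidates after a 3-word context
      ```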

      • Danny M@lemmy.escapebigtech.info · 1 year ago

        yes, as I said, it’s an EVOLUTION of Markov chains, but the idea is the same. As you pointed out, one major difference is that instead of accounting for only the last 1-5 words, it accounts for a larger context window. The LSTM is just a parlor trick. Read the paper on the original transformer model: https://browse.arxiv.org/pdf/1706.03762.pdf
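
        For reference, the mechanism from the linked paper that attends over the whole context window is scaled dot-product attention; below is a minimal numpy sketch, with the dimensions and random inputs chosen purely for illustration:

        ```python
        import numpy as np

        def attention(Q, K, V):
            """softmax(Q K^T / sqrt(d_k)) V, i.e. scaled dot-product attention from the linked paper."""
            d_k = Q.shape[-1]
            scores = Q @ K.T / np.sqrt(d_k)                   # pairwise compatibility of positions
            w = np.exp(scores - scores.max(axis=-1, keepdims=True))
            w /= w.sum(axis=-1, keepdims=True)                # softmax over the context window
            return w @ V

        # Toy "context window" of 5 token vectors, each 8-dimensional.
        rng = np.random.default_rng(0)
        x = rng.normal(size=(5, 8))
        print(attention(x, x, x).shape)  # self-attention output shape: (5, 8)
        ```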