• GrappleHat@lemmy.ml · 6 months ago

    I’m very skeptical that this “model poisoning” approach will work in practice. To pull it off would require a very high level of coordination among disparate people generating the training data (the images/text). I just can’t imagine it happening. Add to that: big tech has A LOT of resources to play this cat & mouse game.

    I hope I’m wrong, but I predict big tech wins here.

  • catloaf@lemm.ee · 6 months ago

    No, because a method that works on one implementation almost certainly doesn’t work on another.

  • General_Effort@lemmy.world · 6 months ago

    This doesn’t have anything to do with tracking. It’s supposed to sabotage free and open image generators (i.e. Stable Diffusion). It’s unlikely to do anything, though.

    Hard to say what the makers want to achieve with this. Even if it did work, it would help artists just about as much as better DRM would help programmers. On its face, this is just about enforcing some ultra-capitalist ideology that wants information to be owned.

    • CheeseNoodle@lemmy.world · 6 months ago

      I see it as trying to combat the dystopia where not only is our data scraped, but now every single thing we write, draw, or film is fed into an AI that will ultimately be used to create huge amounts of wealth for very few, essentially monetizing our very existence online in a way that’s entirely unavoidable and without consent.

      In addition, it’s entirely one-way: Google and others can grab as much of our data as they want, while most of us would have an extremely hard time even getting a freedom of information request about ourselves granted, let alone grabbing a similar amount of data about those same corporations.

      • General_Effort@lemmy.world · 6 months ago

        “that will ultimately be used to create huge amounts of wealth for very few”

        But… That is what these poisoning attacks are fighting for. They are attacking open image generators that can be used by anyone. You can use them for fun or for business, without having to pay rent to some owner who is not lifting a finger. What do you think will happen if you knock that out?

        • Amerikan Pharaoh@lemmygrad.ml · 6 months ago

          If it uses my data and hasn’t paid for my data, it’s stealing from me. You don’t get to have it both ways: either we can have a communist system where I don’t need to worry about my bottom line anymore, or we can have this capitalist bullshit and you can fuckin pay me every time your machine’s data-gripper reaches into my metaphorical pockets.

  • Zerush@lemmy.ml · 6 months ago

    For image tracking, it’s enough to use Imgur for sharing, for any image, even your own; no AI image needed. I miss the bot on Lemmy that redirects videos to Piped, while Imgur is even worse. Better alternatives made in the EU, like File Coffee or Vgy.me, would be desirable.

  • darkphotonstudio@beehaw.org · 6 months ago

    Yes, we need more artists defending capitalism with futile, annoying, and ineffective attempts at DRM. I guess we didn’t learn anything from the music DRM wars in the 00s.