Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology, politics, and science fiction.
Spent many years on Reddit and then some time on kbin.social.
Yup. Fortunately unsubscribing from politics subreddits is generally advisable whether one has been banned from them or not.
Being slightly wrong means more of an endorphin rush when people realize they can pounce on the flaw they’ve spotted, I guess.
Don’t sweat downvotes, they’re especially meaningless on the Fediverse. I happen to like a number of applications for AI technology and cryptocurrency, so I’ve certainly collected quite a few of those and I’m still doing okay. :)
There was a politics subreddit I was on that had a “downvoting is not allowed” rule. There’s literally no way to tell who’s downvoting on Reddit, or even if downvoting is happening if it’s not enough to go below 0 or trigger the “controversial” indicator.
I got permabanned from that subreddit when someone who'd said something offensive asked "why am I being downvoted???" and I tried to explain to them why that was the case. No trial, one million years dungeon, all modmail ignored. I guess they don't get to enforce that rule often and so leapt at the opportunity to find an excuse.
Downvotes for not getting it right, I presume.
Which makes me concerned that the “Hole for Pepnis” answer has so many upvotes.
Those holes look open to me.
Especially because seeing the same information in different contexts helps map the links between the different contexts and helps dispel incorrect assumptions.
Yes, but this is exactly the point of deduplication - you don’t want identical inputs, you want variety. If you want the AI to understand the concept of cats you don’t keep showing it the same picture of a cat over and over, all that tells it is that you want exactly that picture. You show it a whole bunch of different pictures whose only commonality is that there’s a cat in it, and then the AI can figure out what “cat” means.
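To make the idea concrete, here's a toy Python sketch of exact-duplicate removal from an image folder. It's purely illustrative and not anyone's actual training pipeline; real dataset deduplication usually relies on perceptual or embedding-based hashes so that near-duplicates (resized or recompressed copies of the same picture) get caught too.

```python
import hashlib
from pathlib import Path

def dedupe_images(image_dir: str) -> list[Path]:
    """Keep one copy of each unique file, judged by exact byte content.

    Toy illustration of dataset deduplication: real pipelines typically use
    perceptual or embedding hashes so near-duplicate images are caught as well.
    """
    seen: set[str] = set()
    unique: list[Path] = []
    for path in sorted(p for p in Path(image_dir).iterdir() if p.is_file()):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest not in seen:  # first time we've seen this exact image
            seen.add(digest)
            unique.append(path)
    return unique
```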
> They need to fundamentally change big parts of how learning happens and how the algorithm learns to fix this conflict.
Why do you think this?
There actually isn’t a downside to de-duplicating data sets; overfitting is simply a flaw. Generative models aren’t supposed to “memorize” stuff - if you really want a copy of an existing picture there are far easier and more reliable ways to accomplish that than giant GPU server farms. These models don’t derive any benefit from drilling on the same subset of data over and over. It makes them less creative.
I want to normalize the notion that copyright isn’t an all-powerful fundamental law of physics like so many people seem to assume these days, and if I can get big companies like Meta to throw their resources behind me in that argument then all the better.
Remember when piracy communities thought that the media companies were wrong to sue switch manufacturers because of that?
It baffles me that there’s such an anti-AI sentiment going around that it would cause even folks here to go “you know, maybe those litigious copyright cartels had the right idea after all.”
We should be cheering that we’ve got Meta on the side of fair use for once.
> look up sample recovery attacks.
Look up “overfitting.” It’s a flaw in generative AI training that modern AI trainers have done a great deal to resolve, and even in the cases of overfitting it’s not all of the training data that gets “memorized.” Only the stuff that got hammered into the AI thousands of times in error.
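If you want to see the phenomenon in miniature, here's a toy numpy sketch using curve fitting instead of image generation: the over-parameterized fit chases the noise in the training points (near-zero training error) and generalizes worse. It's only an analogy for what "memorization" means, not how image models are actually trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# A simple underlying trend plus noise.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, size=x_train.size)
x_test = np.linspace(0, 1, 100)
y_test_true = 2 * x_test

# Degree-1 fit: learns the trend. Degree-9 fit: "memorizes" the noisy points.
for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test_true) ** 2)
    print(f"degree {degree}: train error {train_err:.4f}, test error {test_err:.4f}")
```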
It’s social media. Social media is all about bubbles, groupthink, driving engagement. It happens on Facebook, it happens on Reddit.
It happens here, too. There are certain views that are accepted as what every right-thinking person holds, and certain other views that are dumped on with great glee about how wrong they are. But which specific views they are varies from bubble to bubble.
Training an AI does not involve copying anything so why would you think that fair use is even a factor here? It’s outside of copyright altogether. You can’t copyright concepts.
Downloading pirated books to your computer does involve copyright violation, sure, but it’s a violation by the uploader. And look at what community we’re in, are we going to get all high and mighty about that?
What did I say that implied that? I’m pointing out a contradiction in kilgore’s comment, I’m not adding anything of my own here.
Their distribution of books is completely legal.
Corporations just have more money to warp the laws in their favour.
You just contradicted yourself in two sentences.
But I think the law is pretty clear, and a precedent calling their use case fair use would be mind-blowing. You need new, much more common-sense IP legislation that redefines consumer rights in a digital world.
Indeed. I’m a big supporter of IA’s mission, and I’m a big supporter of piracy (copyright has gone insane over the years), but this outcome was obvious from the moment IA did this and it was a mistake for them to fight this fight. They should focus on preservation. Let the EFF handle the lawsuits, and let Library Genesis handle the illegal distribution of books. Everyone focus on what they’re best at.
They’re appealing the decision so there’s still opportunity for IA to throw good money after bad on this.
Even if you trained the AI yourself from scratch you still can’t be confident you know what the AI is going to say under any given circumstance. LLMs have an inherent unpredictability to them. That’s part of their purpose, they’re not databases or search engines.
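For illustration, the sampling step looks roughly like the sketch below; the logits are made-up numbers, but the point is that anything above a temperature of roughly zero is deliberately stochastic, so identical prompts can give different answers.

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 1.0,
                      rng: np.random.Generator | None = None) -> int:
    """Sample one token index from a logit vector, the way most LLM frontends do.

    The randomness here is deliberate: unless temperature is forced toward 0
    (greedy decoding), the same prompt can legitimately produce different
    continuations on different runs.
    """
    rng = rng or np.random.default_rng()
    scaled = logits / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

# Hypothetical logits for four candidate tokens; repeated sampling varies.
logits = np.array([2.0, 1.5, 0.5, -1.0])
print([sample_next_token(logits, temperature=0.8) for _ in range(10)])
```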
> if I were to download a pre-trained model from what I thought was a reputable source, but was man-in-the-middled and provided with a maliciously trained model
This is a risk for anything you download off the Internet; even source code could be MITMed to give you something with malicious stuff embedded in it. And no, I don’t believe you’d read and comprehend every line of it before you compile and run it. You need to verify checksums.
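Something like this is all it takes. The filename and digest below are placeholders; compare against whatever value the model's distributor publishes over a trusted channel.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a downloaded file in streaming chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder values - substitute the real filename and published digest.
expected = "0123456789abcdef..."  # not a real digest
actual = sha256_of("model.safetensors")
print("OK" if actual == expected else "MISMATCH - do not load this file")
```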
As I said above, the real security comes from the code that’s running the LLM model. If someone wanted to “listen in” on what you say to the AI, they’d need to compromise that code to have it send your inputs to them. The model itself can’t do that. If someone wanted to have the model delete data or mess with your machine, it would be the execution framework of the model that’s doing that, not the model itself. And so forth.
You can probably come up with edge cases that are more difficult to secure, such as a troubleshooting AI whose literal purpose is messing with your system’s settings and whatnot, but that’s why I said “99% of the way there” in my original comment. There’s always edge cases.
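Here's a cartoon of that trust boundary, with everything invented for illustration: the "model" is just a pure function from token ids to next-token scores, while the tokenizer, the sampling loop, and every real-world side effect live in the surrounding framework code, which is the part you'd actually need to audit.

```python
VOCAB_SIZE = 100

def model(token_ids: list[int]) -> list[float]:
    """Stand-in for the weights: a pure function from context to next-token scores.
    No file, network, or system access happens in here."""
    h = sum(token_ids) + 1
    return [((h * (i + 3)) % 17) / 17 for i in range(VOCAB_SIZE)]

def generate(prompt: str, steps: int = 5) -> list[int]:
    """The 'framework': tokenization, the decoding loop, and any side effects
    all live out here - this is the code whose trustworthiness matters."""
    tokens = [ord(c) % VOCAB_SIZE for c in prompt]  # toy tokenizer
    for _ in range(steps):
        scores = model(tokens)  # the weights only transform numbers
        tokens.append(max(range(VOCAB_SIZE), key=scores.__getitem__))  # greedy decode
    return tokens

print(generate("hello"))
```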
Ironically, as far as I’m aware it’s based off of research done by some AI decelerationists over on the alignment forum who wanted to show how “unsafe” open models were in the hopes that there’d be regulation imposed to prevent companies from distributing them. They demonstrated that the “refusals” trained into LLMs could be removed with this method, allowing it to answer questions they considered scary.
The open LLM community responded by going “coooool!” and adapting the technique as a general tool for “training” models in various other ways.
That would be part of what’s required for them to be “open-weight”.
A plain old binary LLM model is somewhat equivalent to compiled object code, so redistributability is the main thing you can “open” about it compared to a “closed” model.
An LLM model is more malleable than compiled object code, though; as I described above, there are various ways you can mutate an LLM model without needing its “source code.” So it’s not exactly equivalent to compiled object code.
Fortunately, LLMs don’t really need to be fully open source to get almost all of the benefits of open source. From a safety and security perspective it’s fine because the model weights don’t really do anything; all of the actual work is done by the framework code that’s running them, and if you can trust that due to it being open source you’re 99% of the way there. The LLM model just sits there transforming the input text into the output text.
From a customization standpoint it’s a little worse, but we’re coming up with a lot of neat tricks for retraining and fine-tuning model weights in powerful ways. The most recent big development I’ve heard of is abliteration, a technique that lets you isolate a particular “feature” of an LLM and either enhance it or remove it. The first big use of it is to modify various “censored” LLMs to remove their ability to refuse to comply with instructions, so that all those “safe” and “responsible” AIs like Goody-2 can be turned into something that’s actually useful. A more fun example is MopeyMule, a LLaMA3 model that has had all of his hope and joy abliterated.
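For the curious, here's a heavily simplified numpy sketch of the core idea as I understand it from the published write-ups: estimate a single "refusal" direction from the difference in mean activations between two sets of prompts, then project that direction out of the matrices that write to the residual stream. Real abliteration operates on actual transformer weights layer by layer; the shapes and stand-in activations below are invented purely for illustration.

```python
import numpy as np

def refusal_direction(refused_acts: np.ndarray, complied_acts: np.ndarray) -> np.ndarray:
    """Estimate a single feature direction as the normalized difference of mean
    residual-stream activations between refused and complied-with prompts."""
    direction = refused_acts.mean(axis=0) - complied_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

def ablate_direction(weight: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Project the feature direction out of a weight matrix's output space so the
    model can no longer write that feature into the residual stream."""
    d = direction.reshape(-1, 1)
    return weight - d @ (d.T @ weight)

# Toy shapes: 64-dim hidden states, activations collected from 100 prompts each.
rng = np.random.default_rng(0)
refused = rng.normal(size=(100, 64)) + 0.5   # stand-in activations, not real model data
complied = rng.normal(size=(100, 64))
direction = refusal_direction(refused, complied)
W_out = rng.normal(size=(64, 256))           # stand-in for an output projection matrix
W_ablated = ablate_direction(W_out, direction)
print(np.abs(direction @ W_ablated).max())   # ~0: the output has no component along the direction
```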
So I’m willing to accept open-weight models as being “nearly as good” as a full-blown open source model. I’d like to see full-blown open source models develop more, sure, but I’m not terribly concerned about having to rely on an open-weight model to make an AI system work for the immediate term.
They’re not claiming it’s AGI, though. You’re missing a broad middle ground between dumb calculators and HAL 9000.
And sometimes that’s exactly what I want, too. I use LLMs like ChatGPT when brainstorming and fleshing out fictional scenarios for tabletop roleplaying games, for example, and in those situations coming up with plausible nonsense is specifically the job at hand. I wouldn’t want to go “ChatGPT, I need a description of what the interior of a wizard’s tower is like” and get the response “I don’t know what the interior of a wizard’s tower is like.”