Yeah, this seems like the last confirmation we didn’t really need.
What are his feelings on open source? That’s my question.
If you’re using the same UI and metadata, you should be able to reproduce images with only slight differences and then upscale them with hires fix or something else.
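The reason identical metadata gets you a near-identical image is that the starting noise is fully determined by the seed. A toy sketch of that idea in pure Python (the real backend draws a latent tensor from a seeded `torch.Generator`; the metadata keys below are illustrative of what UIs like AUTOMATIC1111 embed in the PNG, not an exact spec):

```python
import random

def initial_noise(seed: int, n: int = 8) -> list[float]:
    # Stand-in for the latent-noise tensor a diffusion backend draws
    # from its RNG before denoising begins.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

# Illustrative parameters of the kind saved in image metadata.
metadata = {"seed": 1234567890, "steps": 20, "cfg_scale": 7.0, "sampler": "Euler a"}

first = initial_noise(metadata["seed"])
second = initial_noise(metadata["seed"])
assert first == second  # same seed -> identical starting noise

```

With the same seed, sampler, step count, and prompt, the denoising trajectory starts from the same point, so differences come only from backend/version drift.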
Doesn’t seem like it.
That’s kind of unbelievable given what they say it can do.
They said they would be open sourcing it.
That was really cool.
Those might just be LoRA merged models, not full fine-tuning. From what I heard, fine-tuning doesn’t work because the models are distilled. You’d have to find a way to undistill them to train them.
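For background on what "LoRA merged" means (a generic sketch, not Flux-specific): a LoRA stores a low-rank delta `B @ A` for a frozen weight `W`, and merging bakes `W' = W + (alpha / rank) * (B @ A)` into the checkpoint. Toy matrices in pure Python, since real weights are large tensors:

```python
def matmul(A, B):
    # Naive matrix multiply, fine for tiny illustrative matrices.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def merge_lora(W, B, A, alpha, rank):
    # W' = W + (alpha / rank) * (B @ A): the low-rank delta is baked
    # into the base weight, which is what a "LoRA merged" model is.
    scale = alpha / rank
    delta = matmul(B, A)
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy 2x2 base weight with a rank-1 LoRA (B: 2x1, A: 1x2).
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]
A = [[0.5, 0.5]]
merged = merge_lora(W, B, A, alpha=1.0, rank=1)
# merged == [[1.5, 0.5], [1.0, 2.0]]
```

The merged model behaves like base-plus-LoRA without loading the adapter separately; it's still the distilled base underneath, not a full fine-tune.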
Last I heard, LoRAs cause catastrophic forgetting in the model, and full fine-tuning doesn’t really work.
I don’t think so. They’re going to have to do a lot better than a tutorial to win people back. That said, the fact that the two Flux models are distilled, which makes them close to impossible to fine-tune, sucks too.
Art isn’t work, it’s speech. It’s part of the human condition. Art is useless, said Wilde. Art is for art’s sake—that is, for beauty’s sake.
I do not make art, I just post it here on Lemmy. I’d be OK with that. People freely create, copy, and iterate on memes, and they are the greatest cultural touchstones we have. First and foremost, people create because they have something to say.
People already make memes and mods for free. Humans are a social species and will continue to create and share things until the end of time. Making money off of creation is a privilege for only a tiny few.
The way you described is already how Civitai works. Maybe it’s to keep the moderation of the two sites cleanly separated. This way the team on green can do whatever they want, on green.
You keep moving the goalposts and putting words in my mouth. I never said you can make new things out of nothing. Nothing I mentioned comes close to approaching, equaling, or exceeding the effort of training a model.
You haven’t answered a single one of my questions, and you are not arguing in good faith. We’re done here. I can’t say it’s been a pleasure.
Do you have any examples of how they fail? There are plenty of ways to explain new concepts to models.
https://arxiv.org/abs/2404.19427 https://arxiv.org/abs/2406.11643 https://arxiv.org/abs/2403.12962 https://arxiv.org/abs/2404.06425 https://arxiv.org/abs/2403.18922 https://arxiv.org/abs/2406.01300
What kind of creativity are you talking about then? I’ve also never heard of a bloated model. Which models are bloated?
But at what point does that guidance just become the dataset you removed from the training data?
The whole point is that it didn’t know the concepts beforehand, and no, it doesn’t become the dataset. Observations about the training data are baked into the model’s weights during training; once training ends, the weights are locked in and the dataset itself is never consulted again.
To get it to run Doom, they used Doom.
To realize a new genre, you’ll “just” have to make that game the old-fashioned way first.
Or you could train a more general model. These things happen in steps, research is a process.
I found this guide on how to make an inpainting model out of any model. Though it’s pretty out of date.
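The usual trick in guides like that is an “add difference” merge: start from the official inpainting checkpoint and add the delta between your custom model and the plain base model, per shared weight. A minimal sketch with toy scalar “state dicts” (real checkpoints are tensors loaded via e.g. safetensors; names here are illustrative):

```python
def add_difference(inpaint_base, custom, base, multiplier=1.0):
    # custom_inpaint = inpaint_base + multiplier * (custom - base),
    # applied per weight that all three checkpoints share. The inpainting
    # UNet has extra mask/conditioning channels, so keys missing from the
    # ordinary checkpoints are copied from the inpainting base unchanged.
    merged = dict(inpaint_base)
    for key, w in inpaint_base.items():
        if key in custom and key in base:
            merged[key] = w + multiplier * (custom[key] - base[key])
    return merged

# Toy "state dicts": one shared weight, one inpainting-only weight.
inpaint_base = {"unet.w": 0.2, "unet.mask_conv": 0.9}
custom       = {"unet.w": 0.5}
base         = {"unet.w": 0.1}
merged = add_difference(inpaint_base, custom, base)
# merged["unet.w"] is approximately 0.6; "unet.mask_conv" stays 0.9
```

This is the same merge most checkpoint-merger UIs expose as “add difference” with a multiplier of 1.0; whether it still works well on newer architectures is a separate question.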