• 0 Posts
  • 14 Comments
Joined 8 months ago
Cake day: November 10th, 2023

  • Happened to me with an even bigger instance because of an asshole admin making shit up. A solution might be to split the roles: the host that serves a user’s comments, the moderator agents, and the receiver of the comments. If your host bans you, that’s it; if a receiver bans you, that only affects their users; and if a moderator agent group bans you, that only bans you from their distribution group, while your comments could still be read through other groups.

    If a community (a group of moderator agents under a community tag for a particular host) bans you, you’d have to find another group of moderator agents or accept all the ones allowed by your host. Accepting everything your host allows could realistically exclude only the worst offenders, spammers, doxxers, and the like, so you’d really be incentivized to find a better block of moderator agents if you want to avoid certain types of comments. People who want to live in a bubble could live in a bubble, while people who want to prioritize the greatest participation would seek out the most lenient host and the most lenient moderation agents, at least relative to their particular sensitivities.

    It would be a truer federated model, but this is not lemmy as it is.
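    The three ban scopes described above could be sketched roughly like this (all names are hypothetical, this is not any real federation protocol or lemmy’s actual model):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the proposed split: a host ban is global for
# that user, a receiver ban only hides the user from the receiving
# instance's users, and a moderator-group ban only affects that
# group's distribution.

@dataclass(frozen=True)
class User:
    name: str
    host: str

@dataclass
class Network:
    host_bans: set = field(default_factory=set)        # {(host, user)}
    receiver_bans: dict = field(default_factory=dict)  # receiver -> {user}
    group_bans: dict = field(default_factory=dict)     # mod group -> {user}

    def visible(self, user: User, receiver: str, mod_group: str) -> bool:
        if (user.host, user.name) in self.host_bans:
            return False  # host ban: that's it, gone everywhere
        if user.name in self.receiver_bans.get(receiver, set()):
            return False  # only hidden on this receiving instance
        if user.name in self.group_bans.get(mod_group, set()):
            return False  # only within this group's distribution
        return True
```

    The point of the sketch is that a group ban leaves the user readable through any other moderator group, whereas a host ban is terminal.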

  • TheObviousSolution@lemm.ee to Comics@lemmy.ml · Zionist Karen · 2 months ago

    Nothing quite like lemmy for that. I wouldn’t be surprised if the troll factory mobilized on reddit’s worldnews was beginning to catch on to the troll factories on the other side of the coin mobilized on lemmy. In fact, this being on lemmy.ml, I would normally expect to hear only pro-Hamas opinions. Guess we’ll have to wait and see which comments get removed to find out whether things have changed or it was just a day off.


  • “which could be anything and is guaranteed to upset someone with either answer.”

    Funny how it only matters with certain answers.

    The answer to “Why” is that it should become clear the topic itself is actively censored, which is exactly the possibility the original comment wanted to discard. But I can’t force people to see what they don’t want to.

    “it’s just parroting whatever it’s been trained on”

    If that’s your take on training LLMs, then I hope you aren’t involved in training them. A lot more effort goes into it than that, including making sure the model isn’t just “parroting” its training data. Post-processing that removes answers about particular topics is another thing entirely, and that’s what’s happening here.

    Not even being able to answer whether Gaza exists is being so lazy that it becomes dystopian. There are plenty of ways an LLM can handle controversial topics, and in fact Google Gemini’s LLM does as well; it was just censored before it could get the chance to do so and be subsequently refined. This is why other LLMs will win over Google’s: Google doesn’t put in the effort. Good thing other LLMs don’t adopt your approach to things.
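    The distinction I’m drawing, the model producing an answer versus a blunt filter bolted on afterwards, could be sketched like this (purely hypothetical, not Gemini’s actual pipeline; the keyword list and function names are made up for illustration):

```python
# Hypothetical sketch: a post-processing filter that suppresses any
# answer mentioning a blocked keyword, regardless of how well the
# underlying model actually handled the topic.

BLOCKED_TOPICS = {"gaza"}  # illustrative only

def model_answer(prompt: str) -> str:
    # Stand-in for a real LLM call; pretend it returns a nuanced answer.
    return f"Here is a balanced summary regarding: {prompt}"

def postprocess(answer: str) -> str:
    # The lazy approach: refuse whenever a blocked keyword appears,
    # throwing away whatever the model actually said.
    if any(topic in answer.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that."
    return answer

print(postprocess(model_answer("Does Gaza exist?")))
# The model's (fine) answer is discarded because it contains a keyword.
```

    The filter never looks at whether the model’s answer was reasonable, which is why it produces absurd refusals on questions as basic as whether a place exists.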