I made a robot moderator. It models trust flow through a network that’s made of voting patterns, and detects people and posts/comments that are accumulating a large amount of “negative trust,” so to speak.

In its current form, it is supposed to run autonomously. In practice, I have to step in and fix some of its boo-boos when it makes them, which happens sometimes but not very often.

I think it’s working well enough at this point that I’d like to experiment with a mode where it acts as an assistant to an existing moderation team, instead of taking its own actions. I’m thinking about making it auto-report suspect comments, instead of autonomously deleting them. There are other modes that might be useful, but that might be a good place to start. Is anyone interested in trying the experiment in one of your communities? I’m pretty confident that at this point it can ease moderation load without causing many problems.
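To make the proposal concrete, here is a minimal sketch of the two modes under discussion. This assumes a hypothetical interface: the names (`Mode`, `Comment`, `handle`) and the trust threshold are illustrative, not santabot’s actual code.

```python
# Sketch of autonomous vs. advisory operation, assuming a hypothetical
# moderation interface. Names and threshold are illustrative only.
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()   # bot deletes suspect comments itself (current behavior)
    ADVISORY = auto()     # bot only files reports for human moderators (proposed)

@dataclass
class Comment:
    id: str
    author: str
    trust: float  # strongly negative trust marks a comment as suspect

def handle(comment: Comment, mode: Mode, threshold: float = -1.0) -> str:
    """Return the action taken for a comment, depending on the mode."""
    if comment.trust >= threshold:
        return "ignore"
    if mode is Mode.AUTONOMOUS:
        return "delete"
    return "report"

print(handle(Comment("c1", "alice", -2.5), Mode.ADVISORY))    # report
print(handle(Comment("c2", "bob", 0.3), Mode.ADVISORY))       # ignore
print(handle(Comment("c1", "alice", -2.5), Mode.AUTONOMOUS))  # delete
```

The only difference between the modes is the final action taken once a comment crosses the threshold; everything upstream of that decision stays the same.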

!santabot@slrpnk.net

  • auk@slrpnk.net (OP) · 14 days ago

    > Does that mean hostile but popular comments in the wrong communities would get a pass, though?

    They have no effect. The impact of someone’s upvote depends on how much trust that person has from the wider community. It’s a huge recursive formula, almost the same as PageRank. The upshot is that those little isolated wrong communities have no power unless the wider community also gives them some upvotes. It’s a very clever algorithm. I like it a lot.
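    The general shape of that recursion can be sketched as a personalized-PageRank-style iteration: each user’s votes carry weight proportional to that user’s own trust, baseline trust flows only from a seed set of established accounts, and users with negative trust get no voting power. Everything here (the graph, the damping factor, the seed set) is an illustrative assumption, not santabot’s actual implementation.

    ```python
    # Personalized-PageRank-style trust sketch. Vote weight scales with the
    # voter's own trust; baseline trust is seeded only from established
    # accounts. Names and parameters are illustrative assumptions.
    def trust_rank(votes, seeds, iterations=100, damping=0.85):
        """votes: {voter: {target: +1 or -1}}; seeds: accounts given baseline trust."""
        users = set(votes) | {t for cast in votes.values() for t in cast} | set(seeds)
        base = {u: (1 - damping) / len(seeds) if u in seeds else 0.0 for u in users}
        rank = dict(base)
        for _ in range(iterations):
            new = dict(base)
            for voter, cast in votes.items():
                weight = max(rank[voter], 0.0)  # negative-trust voters have no pull
                if not cast or weight == 0.0:
                    continue
                share = damping * weight / len(cast)
                for target, sign in cast.items():
                    new[target] += sign * share  # downvotes subtract trust
            rank = new
        return rank

    # An isolated clique ("x", "y") that only upvotes itself ends up with
    # zero trust, because no trust ever flows into it from the seeded
    # wider community ("c" seeds trust into "a" and "b").
    votes = {
        "a": {"b": 1}, "b": {"a": 1},   # pair upvoted by the seed account "c"
        "c": {"a": 1, "b": 1},
        "x": {"y": 1}, "y": {"x": 1},   # isolated mutual-upvote clique
    }
    ranks = trust_rank(votes, seeds={"c"})
    ```

    This is why mutual upvoting inside an isolated pocket does nothing: the iteration only redistributes trust that originates in the seeded wider community, so a clique with no inbound votes from trusted users has nothing to redistribute.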

    For normal minority communities like vegans, that’s not a problem. They still get some upvotes, because the occasional conflict isn’t the normal state, so they count as normal users. They post stuff, people generally upvote more than they downvote by about 10 to 1, and they’re their own separate thing, which is fine. Minority communities that are totally isolated from interactions with the wider community just have more or less zero rank, so it doesn’t matter what they think. They’re not banned, unless they’ve done something, but their votes do almost nothing. Minority communities that constantly pick fights with the wider community tend to have negative rank, so it also doesn’t matter what they think, in terms of the impact of them mutually upvoting each other.

    I think it might be a good idea to set up “canary” communities, vegans being a great example, with the bot posting warnings if users from those communities start to get ranked down. That can be a safety check to make sure it is working the way it’s supposed to. Even if that downranking does happen, it might be fine, if their behavior is obnoxious and the community is reacting with downvotes, or it might be a sign of a problem. You have to look up people’s profiles and look at the details. In general, people on Lemmy don’t spend very much time going into the vegan community and spreading hate and downvotes just for the sake of hatred, because they saw some vegans being vegans. Usually there’s some reason for it.
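    That safety check could be sketched roughly like this: watch the average trust of users from designated communities, and flag a community for human review if it drops below a floor. The data shapes and names here are hypothetical, not santabot’s actual structures.

    ```python
    # Sketch of the "canary" safety check described above. Data shapes and
    # names are hypothetical illustrations, not santabot's real structures.
    def canary_check(ranks, memberships, canaries, floor=0.0):
        """ranks: {user: trust}; memberships: {user: home community};
        canaries: communities to watch. Returns communities needing review."""
        flagged = []
        for community in canaries:
            members = [u for u, c in memberships.items() if c == community]
            if not members:
                continue
            avg = sum(ranks.get(u, 0.0) for u in members) / len(members)
            if avg < floor:
                flagged.append(community)  # post a warning; a human checks profiles
        return flagged

    ranks = {"v1": 0.3, "v2": -0.1, "t1": -0.2, "t2": -0.4}
    memberships = {"v1": "vegan", "v2": "vegan", "t1": "troll", "t2": "troll"}
    print(canary_check(ranks, memberships, ["vegan", "troll"]))  # ['troll']
    ```

    The point is only to trigger a human look, not an automatic action: as noted above, a downranked canary might be fine or might signal a problem, and only checking individual profiles can tell the difference.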

    One thing that definitely does happen is people from that minority community going out and picking fights with the wider community, and then beginning to make a whining sound when the reaction is negative, and claiming that the heat they’re getting is because of their viewpoint, and not because they’re being obnoxious. That happens quite a lot.

    I think some of the instances that police and ban dissent set up a bad expectation for their users. People from there feel like their tribe is being attacked if they come into contact with a viewpoint they’ve been told is the “wrong” one, and then they make these blanket proclamations about how their own point of view is God’s truth while attacking anyone who disagrees, and then they sincerely don’t expect the hostile response that they get. I think some of them sincerely feel silenced when that happens. I don’t know what to do about that other than be transparent and supportive about where the door to being able to post is, if they want to go through it, and otherwise minimize how much they can irritate everyone else for as long as that’s their MO.

    > I still think that instead of the bot considering all of Lemmy as one community, it would be better if moderators could provide focus for it, because there are differences in values between instances and communities that I think should be reflected in the moderation decisions that are taken.

    It definitely does that. It just uses a more sophisticated metric for “value” than a hard-coding of which are the good communities and which are the bad ones.

    I think the configuration options to give more weight or primacy to certain communities are still in the code. I’m not sure. I do see what you’re saying. I think it might be wise for me, if anyone does wind up wanting to play with this, to give as many tools as possible to moderators who want to use it, and just let them make the decision. I think the bot is capable of working without being told which communities are the good ones, but if someone can replicate the checking I’ve done on it, they’ll be happier with the outcome, whether or not they wind up with the same conclusions as me.

    And yes, definitely making it advisory to the moderators, instead of its own autonomous AI drone banhammer, will increase people’s trust.