I made a robot moderator. It models trust flow through a network built from voting patterns, and detects people and posts/comments that are accumulating a large amount of "negative trust," so to speak.
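To give a rough idea, here's a minimal sketch of the general approach. This is illustrative only, not the bot's actual code; the propagation rule, damping constant, and names are assumptions on my part.

```python
# Minimal sketch: treat each vote as a signed edge between users, then
# propagate trust iteratively so that votes from trusted users count for more.
# The damping value and starting scores here are illustrative assumptions.
from collections import defaultdict

def propagate_trust(votes, iterations=20, damping=0.85):
    """votes: list of (voter, target, weight), weight +1 for an upvote
    or -1 for a downvote. Returns a trust score per user."""
    edges = defaultdict(list)
    users = set()
    for voter, target, weight in votes:
        edges[voter].append((target, weight))
        users.update((voter, target))

    trust = {u: 1.0 for u in users}  # start everyone at neutral trust
    for _ in range(iterations):
        new_trust = {u: 1.0 - damping for u in users}
        for voter, outgoing in edges.items():
            share = damping * max(trust[voter], 0.0) / len(outgoing)
            for target, weight in outgoing:
                new_trust[target] += weight * share  # downvotes drain trust
        trust = new_trust
    return trust

# Users (or comments) whose scores stay strongly negative across iterations
# are the ones flagged as accumulating "negative trust".
scores = propagate_trust([("alice", "bob", 1), ("carol", "bob", -1),
                          ("dave", "carol", -1), ("bob", "dave", 1)])
print(sorted(scores.items(), key=lambda kv: kv[1]))
```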
In its current form, it is supposed to run autonomously. In practice, I have to step in and fix some of its boo-boos, which happens sometimes but not very often.
I think it’s working well enough at this point that I’d like to experiment with a mode where it acts as an assistant to an existing moderation team, instead of taking its own actions. I’m thinking about making it auto-report suspect comments, instead of autonomously deleting them. There are other modes that might be useful, but that seems like a good place to start. Is anyone interested in trying the experiment in one of your communities? I’m pretty confident that at this point it can ease moderation load without causing many problems.
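As a rough sketch of what that assistant mode would look like (the `api` object, function names, and threshold below are placeholders I made up for illustration, not a real API):

```python
# Hypothetical sketch: same detection pipeline, but the action taken on a
# flagged comment is a report to the human mod queue instead of a deletion.
AUTONOMOUS = "autonomous"
ASSISTANT = "assistant"

def handle_flagged_comment(comment_id, trust_score, mode, api, threshold=-2.0):
    if trust_score > threshold:
        return  # not suspect enough to act on
    if mode == AUTONOMOUS:
        api.remove_comment(comment_id, reason="negative trust score")
    elif mode == ASSISTANT:
        api.report_comment(
            comment_id,
            reason=f"bot: trust score {trust_score:.2f}, please review")
```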
Two things:
You’ve accused them of being hostile here, and of arguing elsewhere.
This very post by you comes across as hostile to me.
Paradigm is everything, and here you are attempting to be the arbiter of what’s acceptable.
You’ve also made your own bias clear by labelling someone as “coming from lemmy.ml”. You’re attacking the person from the start.
Try not to be hypocritical.
All I can think about is how this bot is immediately a non-starter because this is the kind of attitude I can expect from the author when asking for support or collaboration. It’s not just in this post, either.
Even if the parent comment here was hostile (it’s borderline, at worst), I can’t possibly understand the mentality of being argumentative in a post trying to encourage the use of a service.
Your 1-star review is noted. When I open a Yelp page for the bot, I’ll be sure to let you know, and you can speak to my manager about it.
I had the same reaction. I’d like to see this graph for OP. (And also a sentiment analysis of their comments.)
You can give me a sentiment analysis if you want to; you have my profile.