Mastodon, an alternative social network to Twitter, has a serious problem with child sexual abuse material, according to researchers from Stanford University. In just two days, the researchers found over 100 instances of known CSAM across more than 325,000 posts on Mastodon. They also found hundreds of posts containing CSAM-related hashtags and links pointing to CSAM trading and grooming of minors. One Mastodon server was even taken down for a period of time because of CSAM being posted. The researchers suggest that decentralized networks like Mastodon need more robust moderation tools and reporting mechanisms to address the prevalence of CSAM.

  • Shiri Bailem@foggyminds.com · 1 year ago

    @mudeth @pglpm The grey area is all down to personal choices and how “fascist” your admin is (which comes back to the question of which instance is best for you).

    Defederation is a double-edged sword, because if you defederate constantly for frivolous reasons all you do is isolate your node. This is also why it’s the *final* step in moderation.

    The reality is that it’s a whole bunch of entirely separate environments and we’ve walked this path well with email (the granddaddy of federated social networks). The only moderation we can perform outside of our own instance is to defederate; everything else is just the typical blocking you can do yourself.

    The process here on Mastodon is to decide for yourself what is worth taking action on. If it’s not your instance, you report it to the admin of that instance and they decide if they want to take action and what action to take. And if they decide it’s acceptable, you decide whether it’s a personal problem (just block the user or domain in your own account but leave it federating) or a problem for your whole server (in which case you defederate to protect your users).
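    Roughly, that escalation maps onto Mastodon’s REST API like this (just a sketch: the instance URL, token and IDs are placeholders, exact parameters can vary between Mastodon versions, and the last call needs admin credentials):

    ```python
    # Sketch of the escalation path above. The endpoints are Mastodon's documented
    # report and domain-block APIs; all values here are placeholders.
    import requests

    INSTANCE = "https://example.social"   # placeholder: your home instance
    TOKEN = "your-oauth-token"            # placeholder: user (or admin) access token
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}

    def report_to_remote_admin(account_id: str, status_ids: list[str], comment: str):
        """Step 1: report it; forward=True passes the report to the remote instance's admin."""
        return requests.post(f"{INSTANCE}/api/v1/reports", headers=HEADERS,
                             json={"account_id": account_id, "status_ids": status_ids,
                                   "comment": comment, "forward": True})

    def personal_domain_block(domain: str):
        """Step 2: if their admin won't act and it's only a problem for you, hide the
        whole domain for your own account; it keeps federating for everyone else."""
        return requests.post(f"{INSTANCE}/api/v1/domain_blocks", headers=HEADERS,
                             json={"domain": domain})

    def defederate(domain: str, severity: str = "suspend"):
        """Step 3, admins only and the final step: block the domain instance-wide."""
        return requests.post(f"{INSTANCE}/api/v1/admin/domain_blocks", headers=HEADERS,
                             json={"domain": domain, "severity": severity})
    ```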

    Automated action is bad because there’s no automated identity verification here and it’s an open door to denial of service attacks (harasser generates a bunch of different accounts, uses them all to report a user until that user is auto-suspended).
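    To make that concrete, here’s a toy sketch (a hypothetical auto-moderation rule, not anything Mastodon actually ships) of why a plain report-count trigger falls over without identity verification:

    ```python
    # Hypothetical "auto-suspend after N reports" rule, and how sockpuppets defeat it.
    from dataclasses import dataclass

    AUTO_SUSPEND_THRESHOLD = 5  # hypothetical threshold

    @dataclass
    class Report:
        reporter: str  # account that filed the report
        target: str    # account being reported

    def naive_auto_suspend(reports: list[Report], target: str) -> bool:
        """Suspend as soon as enough reports pile up -- no check on who the reporters are."""
        return sum(r.target == target for r in reports) >= AUTO_SUSPEND_THRESHOLD

    # One harasser, five freshly registered sockpuppets, one innocent target:
    sockpuppets = [Report(f"sock{i}@throwaway.example", "victim@home.example") for i in range(5)]
    print(naive_auto_suspend(sockpuppets, "victim@home.example"))  # True -> unjust auto-suspension
    ```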

    The backlog problem, however, is intrinsic to moderation; every platform struggles with it. You can automate moderation, but that gets abused and there are countless cases of it taking action on harmless content; or you can farm moderation out, but then you get sloppiness.

    The fediverse actually helps in moderation because each admin is responsible for a group of users and the rest of the fediverse basically decides whether they’re doing their job acceptably via federation and defederation (i.e. if you show that you have no issue with open Nazis on your platform, then most other instances aren’t going to want to connect to you).

    • mudeth@lemmy.ca · 1 year ago (edited)

      > Defederation is a double-edged sword

      Agreed. It’s not the solution.

      > The reality is that it’s a whole bunch of entirely separate environments and we’ve walked this path well with email

      On this I disagree. There are many fundamental differences. Email is private, while federated social media is public. Email is primarily one-to-one, or one-to-few; social media is broadcast-style. The law would see them differently, and the abuse potential is also different. @faeranne@lemmy.blahaj.zone also used email as a parallel and I don’t think that model works well.

      > The process here on Mastodon is to decide for yourself what is worth taking action on.

      I agree for myself, but that wouldn’t shield a lay user. I can recommend that a parent sign up for Reddit, because I know what they’ll see on the front page. Asking them to moderate for themselves can be tricky. As an example, if people could moderate content for themselves we wouldn’t have climate skeptics and Holocaust deniers. There is an element of top-down housekeeping to be done for a platform to function as a public service, which is what I assume Lemmy wants to be.

      Otherwise there’s always the danger of it becoming a wild-west platform that’ll attract extremists more than casual users looking for information.

      > Automated action is bad because there’s no automated identity verification here and it’s an open door to denial of service attacks

      Good point.

      > The fediverse actually helps in moderation because each admin is responsible for a group of users and the rest of the fediverse basically decides whether they’re doing their job acceptably via federation and defederation

      The way I see it, this will inevitably lead to a concentration of users, defeating the purpose of federation. One or two servers will be seen as ‘safe’ and people will recommend those to their friends and family. What stops those two instances from becoming the Reddit of 20 years from now? We’ve seen what concentration of power in a few internet companies has done to the Internet itself; why retread the same steps?

      Again, I may be very naive, but I think that with the big idea that is federation, what is sorely lacking is a robust federated moderation protocol.
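      Purely as an illustration of what I mean (hypothetical; nothing like this exists today), such a protocol could let instances exchange signed moderation advisories that each admin is still free to apply, ignore, or weight by how much they trust the issuer:

      ```python
      # Hypothetical building block for a federated moderation protocol -- not an
      # existing standard, just a sketch of the kind of object instances could exchange.
      from dataclasses import dataclass, field

      @dataclass
      class ModerationAdvisory:
          issuer: str                 # instance publishing the advisory, e.g. "lemmy.ca"
          subject: str                # account or domain the advisory concerns
          action: str                 # "silence", "suspend", "defederate", ...
          reason: str                 # human-readable justification
          evidence: list[str] = field(default_factory=list)  # links to the offending posts
          signature: str = ""         # issuer's signature so advisories can't be forged

      def apply_if_trusted(advisory: ModerationAdvisory, trusted_issuers: set[str]) -> bool:
          """Local autonomy is preserved: only act on advisories from issuers you trust."""
          return advisory.issuer in trusted_issuers
      ```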