Hi

I am a computer science student just starting my master's thesis. My focus is on content moderation algorithms, so I am currently exploring how various social media platforms moderate content.

If I understand the docs correctly, content moderation on Mastodon is entirely manual? I haven't read anything about automatic detection of Child Sexual Abuse Material (CSAM), for example, which most centralised platforms seem to do.

Another question along the same lines concerns reposts of content that has already been moderated, for example a racist meme that was posted and taken down before. Are there any measures in place to detect this?
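
To make the question concrete, this is roughly the kind of detection I have in mind, sketched in Python with perceptual hashing (using the imagehash library; the stored hash list and the match threshold are just assumptions for illustration, not anything Mastodon actually does):

    import imagehash
    from PIL import Image

    # Perceptual hashes of images that moderators already removed.
    # How such a list would be persisted per instance is an open question (assumption).
    KNOWN_BAD_HASHES = [
        imagehash.hex_to_hash("d1c48d5a3b2e9f00"),
    ]

    # Maximum Hamming distance at which two images count as "the same meme".
    # The value 8 is an arbitrary illustrative threshold.
    MATCH_THRESHOLD = 8

    def is_known_repost(image_path: str) -> bool:
        """Return True if an uploaded image matches previously removed content."""
        candidate = imagehash.phash(Image.open(image_path))
        return any(candidate - known <= MATCH_THRESHOLD for known in KNOWN_BAD_HASHES)

    if __name__ == "__main__":
        print(is_known_repost("upload.png"))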

Thank you for your help!

  • never_obey (OP) · 1 year ago

    Thank you for your reply. It's neat that there are already APIs which would technically enable some kind of automated moderation. But I understand that automation has to be handled carefully, especially since ML algorithms struggle with context.
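
    For my own notes, this is roughly the kind of pipeline I was picturing on top of those APIs, just a sketch in Python: a classifier flags a post and files a report for human review via Mastodon's POST /api/v1/reports endpoint. The instance URL, access token, IDs and report category below are placeholder assumptions.

        import requests

        INSTANCE = "https://mastodon.example"   # placeholder instance
        TOKEN = "YOUR_ACCESS_TOKEN"             # token for a bot/moderation account (assumption)

        def file_report(account_id: str, status_ids: list[str], reason: str) -> dict:
            """File a report via POST /api/v1/reports.

            An automated check could call this whenever it flags a post,
            leaving the final decision to human moderators.
            """
            resp = requests.post(
                f"{INSTANCE}/api/v1/reports",
                headers={"Authorization": f"Bearer {TOKEN}"},
                data={
                    "account_id": account_id,
                    "status_ids[]": status_ids,
                    "comment": reason,
                    "category": "other",        # assumed category for illustration
                },
                timeout=10,
            )
            resp.raise_for_status()
            return resp.json()

        if __name__ == "__main__":
            # Hypothetical IDs, purely for illustration.
            print(file_report("12345", ["67890"], "Flagged by automated image-hash check"))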