“I’m not sure what difference is between their content and mine, other than the person itself”: A Study of Fairness Perception of Content Moderation on YouTube


Abstract

How social media platforms can conduct content moderation fairly is gaining attention from society at large. Researchers from HCI and CSCW have investigated whether certain factors affect how users perceive moderation decisions as fair or unfair. However, little attention has been paid to unpacking how users form perceptions of (un)fairness from their moderation experiences, especially users who monetize their content. By interviewing 21 for-profit YouTubers (i.e., video content creators), we found three primary ways through which participants assess moderation fairness: equality across their peers, consistency across moderation decisions and policies, and their voice in algorithmic visibility decision-making processes. Building on these findings, we discuss how our participants’ fairness perceptions demonstrate a multi-dimensional notion of moderation fairness and how YouTube implements an algorithmic assemblage to moderate YouTubers. We derive transferable design considerations for fairer moderation systems on platforms that afford creator monetization.

Publication
Proceedings of the ACM on Human-Computer Interaction (PACM HCI), CSCW2
Renkai Ma
HCI Researcher focusing on HCI, social computing, trust & safety

I use human-centered design approaches and mixed methods to study platform moderation with users, moderators, and policy experts, informing better design and policy-making for online communities.