Child Safety Meets Creator Moderation
As online platforms evolve, moderation remains paramount, not only for filtering inappropriate content but also for shielding children from targeted advertising and unwarranted data tracking. Historically, research has concentrated on general child safety, with limited exploration of what child safety means for creators and audiences, especially under regulations such as the Children’s Online Privacy Protection Act (COPPA). The first exploratory study of YouTube’s “made for kids” (MFK) classification, currently under review, brought to light the challenges creators and audiences face in protecting children’s data privacy, with many perceiving the system as inconsistent. Delving into these discussions, we identified intertwined classification systems that can hinder the effectiveness of content moderation and child privacy protection, revealing a need for enhanced design and policy interventions.
In our ongoing survey study, we are examining parental mediation of children’s online safety across platforms. This study seeks to juxtapose parents’ definitions of harmful content with platform policies and to identify which platforms, parental control features, and content categories parents deem essential for child safety.
Another ongoing study uses participatory design workshops to help creators support child safety across multiple platforms. Building on insights from our prior study of cross-platform creators, published at CHI 2023, this study will bring together content creators, moderators, policy experts, and parents. The aims are to ensure content safety for children, to support creators without breaching platform policies, and to understand how moderation policies can best be implemented. Our work underscores how online child safety must be contextualized within creator moderation. As these studies progress, our findings will yield policy and design recommendations that support the broader effort to keep children safe online.