Protecting children’s online privacy is paramount. Online platforms seek to strengthen child privacy protection by incorporating new classification systems into their content moderation practices. One prominent example is YouTube’s “made for kids” (MFK) classification. However, traditional content moderation focuses on managing content rather than protecting users’ privacy, and little is known about how users experience these classification systems. Through a thematic analysis of online discussions about YouTube’s MFK classification system, we present a case study of content creators’ and consumers’ experiences. We found that creators and consumers perceived MFK classification as misaligned with their actual practices, that creators encountered unexpected consequences when labeling their content, and that both groups identified ways in which MFK classification intersects with other platform designs. Our findings shed light on an interwoven network of multiple classification systems that extends the original focus on child privacy to broader child safety issues; these insights inform design principles for child-centered safety within this intricate network.