Trust & Safety Series: Content Moderation
In our final Trust & Safety Creator Series post, we examine Passes’ comprehensive content moderation approach while sharing creator perspectives on how these safeguards impact their platform experience.

For our final blog post in this Trust & Safety Creator Series, we're talking all about content moderation. At Passes, we carefully balance enforcing our content moderation policies with giving creators control over their own content.
Passes Content Moderation
Passes has strict policies prohibiting explicit content on the platform, and we proactively scan content posted by creators to create a safe passage for all users.
We have two layers of moderation: AI and human review. To prevent undesirable content on our site and protect our users, we use three AI tools: Amazon Rekognition Content Moderation, Hive Moderation, and Microsoft PhotoDNA. Then, for the second layer, our Trust & Safety team reviews flagged content according to our classification standards. Read more about it here.
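For the technically curious, here is a minimal sketch of what a two-layer pipeline like this can look like, using Amazon Rekognition's detect_moderation_labels API as the AI layer. The confidence threshold and the send_to_human_review queue function are illustrative assumptions for this sketch, not Passes' actual configuration.

```python
import boto3

# Illustrative confidence threshold -- an assumption for this sketch,
# not Passes' actual setting.
MIN_CONFIDENCE = 80.0

rekognition = boto3.client("rekognition")

def scan_image(image_bytes: bytes) -> list[dict]:
    """Layer 1: ask Amazon Rekognition for content moderation labels."""
    response = rekognition.detect_moderation_labels(
        Image={"Bytes": image_bytes},
        MinConfidence=MIN_CONFIDENCE,
    )
    return response["ModerationLabels"]

def send_to_human_review(image_bytes: bytes, labels: list[dict]) -> None:
    """Hypothetical placeholder for a Trust & Safety review queue."""
    names = ", ".join(label["Name"] for label in labels)
    print(f"Flagged for human review: {names}")

def moderate(image_bytes: bytes) -> str:
    """Run both layers: AI scan first, then human review for anything flagged."""
    labels = scan_image(image_bytes)
    if labels:
        # Layer 2: anything the AI flags is routed to a human reviewer.
        send_to_human_review(image_bytes, labels)
        return "pending_review"
    return "approved"
```

The key design idea in a pipeline like this is that the AI layer never makes a final negative decision on its own: anything it flags is routed to a human reviewer, while clean content is approved automatically.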
Our end goal for content moderation is always to safeguard the well-being of all users on our platform and uphold our community ideals. At the same time, we understand that content moderation can be a source of frustration for creators.

To ease that frustration, we lead with transparency about our policies. We also recently hosted a live webinar about our content moderation policies, with an article summary and the recording available here.
Conclusion
We hope you enjoyed this Trust & Safety Creator Series! We're always here to support your journey on Passes.
If you want to learn more about our trust and safety processes or need help troubleshooting an issue, check out our Help Center, where you can talk to our support chatbot or read up on our resources.