
Our approach to content moderation

Millions of people around the world come to TikTok to create, share, discover and connect, and we’re committed to maintaining a safe, inclusive and authentic environment for our growing community. More than 40,000 safety professionals are dedicated to keeping TikTok safe.


Setting the rules for a safe TikTok experience

Our Community Guidelines establish a set of norms and a common code of conduct that help us maintain a safe and inclusive environment for our community, where genuine interactions and authentic content are encouraged. Our policies are developed by experts from a variety of disciplines, and we strive to enforce these rules equitably, consistently and fairly. We regularly review and update our Community Guidelines so they evolve alongside new behaviours and risks. Our goal is to create a safe and entertaining experience for our diverse community.

Enforcing our Community Guidelines

To protect our community and platform, we remove content and accounts that violate our Community Guidelines. Creators are notified of the removal reason and given a way to appeal the removal. Additionally, content we deem not appropriate for a general audience over the age of 13 is ineligible for recommendation in the For You feed. To enforce our Community Guidelines effectively, we deploy a combination of technology and moderators.

Automated moderation technology

Videos uploaded to TikTok are initially reviewed by our automated moderation technology, which aims to identify content that violates our Community Guidelines. These systems look at a variety of signals across content, including keywords, images, titles, descriptions and audio. If no violation is identified, the content becomes available to view on the platform. If a potential violation is found, the automated moderation system will either pass the content to our safety teams for further review or remove it automatically if there is a high degree of confidence that it violates our Community Guidelines. This automated removal is applied when violations are most clear-cut, such as nudity or youth safety violations. We continue to invest in improving the precision of our automated moderation systems so we can more effectively remove violative content at scale while reducing the number of incorrect removals. If users believe we have made a mistake, they can appeal the removal of their content.
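
In practical terms, this describes a threshold-based decision flow: score the upload's signals, auto-remove only at very high confidence, escalate uncertain cases to human review, and otherwise publish. The sketch below is a minimal illustration of that idea in Python; the signal fields, threshold values and the `moderate` function are hypothetical assumptions, not TikTok's actual system.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical thresholds; the real values and models are not public.
AUTO_REMOVE_THRESHOLD = 0.95   # act automatically only on clear-cut violations
HUMAN_REVIEW_THRESHOLD = 0.50  # escalate anything above this to a safety team


class Decision(Enum):
    ALLOW = "allow"
    HUMAN_REVIEW = "human_review"
    AUTO_REMOVE = "auto_remove"


@dataclass
class UploadSignals:
    """Per-signal violation scores (0.0-1.0) from upstream classifiers (assumed schema)."""
    keywords: float
    images: float
    title: float
    description: float
    audio: float

    def max_score(self) -> float:
        return max(self.keywords, self.images, self.title, self.description, self.audio)


def moderate(signals: UploadSignals) -> Decision:
    """Route an upload based on its strongest violation signal."""
    score = signals.max_score()
    if score >= AUTO_REMOVE_THRESHOLD:
        return Decision.AUTO_REMOVE    # clear-cut violation: remove automatically
    if score >= HUMAN_REVIEW_THRESHOLD:
        return Decision.HUMAN_REVIEW   # uncertain: send to a moderator
    return Decision.ALLOW              # no violation identified: publish


# Example: a borderline image signal is escalated rather than removed outright.
print(moderate(UploadSignals(keywords=0.2, images=0.7, title=0.1, description=0.1, audio=0.0)))
# Decision.HUMAN_REVIEW
```

Keeping the auto-removal threshold much higher than the escalation threshold reflects the trade-off described above: clear-cut violations are removed at scale, while ambiguous content is routed to people so that incorrect removals stay low.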

Content moderators

In order to support fair and consistent review of potentially violative content, moderators work alongside our automated moderation systems and take into account additional context and nuance which may not always be picked up by technology. We moderate content in more than 70 languages, with specialised moderation teams for complex issues, such as misinformation.

Human review also helps improve our automated moderation systems by providing feedback for the underlying machine learning models to strengthen future detection capabilities. This continuous improvement helps to reduce the volume of potentially distressing videos that moderators view and enables them to focus more on content that requires a greater understanding of context and nuance.
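
As a rough illustration of this feedback loop, the sketch below converts hypothetical moderator decisions into labelled examples that could feed a future training run; the `ReviewOutcome` schema and `build_training_examples` helper are assumptions for illustration, not a description of TikTok's pipeline.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ReviewOutcome:
    """One moderator decision on an escalated item (assumed schema)."""
    content_id: str
    model_score: float        # the automated system's violation confidence
    moderator_removed: bool   # the human decision, used as the training label


def build_training_examples(outcomes: List[ReviewOutcome]) -> List[Tuple[str, float, int]]:
    """Turn human review decisions into labelled examples for retraining.

    Cases where the model and the moderator disagree are especially informative,
    since they show where automated detection currently falls short.
    """
    return [(o.content_id, o.model_score, 1 if o.moderator_removed else 0) for o in outcomes]


# Example: the model was fairly confident (0.9), but the moderator kept the video up.
print(build_training_examples([ReviewOutcome("video-123", 0.9, False)]))
# [('video-123', 0.9, 0)]
```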

The responsibilities of content moderators include:

  • Reviewing content flagged by technology: When our automated moderation systems identify potentially problematic content but cannot make an automated decision to remove it, they send the content to our safety teams for further review (a minimal routing sketch follows this list). To support this work, we’ve developed technology that can identify risky or suspicious items – for example, weapons – in video frames, so that content moderators can carefully review the video and the context in which it appears. This technology improves the efficiency of our moderators by helping them more quickly identify violative images or objects, recognise violations and make decisions accordingly.
  • Reviewing reports from our community: We offer our community easy-to-use in-app and online reporting tools so they can flag any content or account they feel is in violation of our Community Guidelines. While these reports are important, the vast majority of removed content is identified proactively before it receives any views or is reported to us.
  • Reviewing popular content: Harmful content has the potential to rapidly gain popularity and pose a threat to our community. In order to reduce this risk, our automated moderation systems may send videos with a high number of views to our content moderators for further review against our Community Guidelines.
  • Assessing appeals: If someone disagrees with our decision to remove their content or account, they can file an appeal for reconsideration. These appeals are sent to content moderators, who decide whether the content should be restored to the platform or the account reinstated.
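
To make the routing described in this list concrete, here is a minimal sketch that assigns items to review queues according to how they reached us; the `Source` values, queue names and view-count threshold are illustrative assumptions rather than TikTok's actual logic.

```python
from enum import Enum


class Source(Enum):
    AUTOMATED_FLAG = "automated_flag"      # escalated by automated moderation
    COMMUNITY_REPORT = "community_report"  # reported in-app or online by users
    POPULAR_CONTENT = "popular_content"    # re-reviewed once views grow quickly
    APPEAL = "appeal"                      # creator disputes a removal


POPULARITY_REVIEW_THRESHOLD = 100_000  # hypothetical view-count trigger


def enqueue_for_review(source: Source, views: int = 0) -> str:
    """Pick a moderation queue for an item based on how it reached review."""
    if source is Source.APPEAL:
        return "appeals_queue"    # decide on restoring content or reinstating accounts
    if source is Source.POPULAR_CONTENT and views >= POPULARITY_REVIEW_THRESHOLD:
        return "priority_queue"   # fast-growing content is checked quickly
    return "standard_queue"       # automated flags and community reports


print(enqueue_for_review(Source.POPULAR_CONTENT, views=250_000))  # priority_queue
```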

Promoting a caring working environment for trust and safety professionals

We strive to promote a caring working environment for all employees, and especially for trust and safety professionals. We use an evidence-based approach to develop programs and resources that support moderators’ psychological well-being.

Our primary focus is on preventative care: well-timed support, training and tools, from recruitment through onboarding and ongoing employment, that help foster resilience and minimise the risk of psychological injury. These measures may include tools and features that allow employees to control their exposure to graphic content when reviewing or moderating content, such as grayscaling, muting and blurring; training for managers to help them identify when a team member may need additional well-being support; and clinical and therapeutic support.

We also provide our trust and safety employees with membership to the Trust and Safety Professional Association (TSPA). This membership allows them to access resources for career development, participate in workshops and events, and connect with a network of peers across the industry.

Partnering for success

We recognise that one of the best ways to protect our community is through ongoing conversation and collaboration with a variety of experts and groups specialising in areas such as family safety, wellness, digital literacy, fact-checking, misinformation, disinformation, and online security.

We have also established regional Safety Advisory Councils, which bring together independent online safety experts to help us develop forward-looking Community Guidelines and features that not only address the challenges of today but also prepare us to face future industry issues.

Our global Community Partner Channel provides selected organisations with an additional route for reporting content that they believe breaks our Community Guidelines so that it can be reviewed by our teams. To date, more than 400 organisations that specialise in a range of safety issues use our Community Partner Channel. As explained above, anyone can report content in-app for review.

These critical partnerships help us improve how we enforce our Community Guidelines, enhance our platform’s safety and privacy controls, and expand our educational resources, as we strive to promote a safe, inclusive and authentic environment for our community.
