
Community Guidelines Enforcement Report

July 1, 2019 – December 31, 2019
Released: July 9, 2020

About this report

TikTok is built upon the foundation of creative expression. We encourage our users to celebrate what makes them unique within a diverse and growing community that does the same. Feeling safe helps people feel comfortable expressing themselves openly, which allows creativity to flourish. This is why our top priority is promoting a safe and positive experience for everyone on TikTok.

We published our first Transparency Report on December 30, 2019 and committed to regularly updating our community on how we’re responsibly responding to data requests and protecting intellectual property. This time we’re expanding the report to also provide insight into:

  • Our approach and policies to protect the safety of our community
  • How we establish and enforce our Community Guidelines
  • How we empower our community with tools and education
  • The volume of videos removed for violating our Community Guidelines

In addition, we’re starting to report content moderation metrics against nine of our content policy categories. At the end of last year, we began rolling out a new content moderation infrastructure that enables us to be more transparent in reporting the reasons violative videos were removed. In this report, we’re sharing the reasons that videos actioned through our new content moderation infrastructure were removed during the month of December. In subsequent reports, we’ll be able to share this data for the full six-month period.

We’re committed to earning the trust of our community every day. We’ll continue to evolve this report to address the feedback we’re hearing from our users, policymakers and experts. Ultimately our goal is to keep TikTok an inspiring and joyful place for everyone.

Latest data

Videos

In the second half of last year (July 1 – December 31, 2019), we removed 49,247,689 videos globally for violating our Community Guidelines or Terms of Service, which is less than 1% of all the videos our users created. Our systems proactively caught and removed 98.2% of those videos before a user reported them, and 89.4% of the removed videos were taken down before they received any views.
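
To make these percentages concrete, the short sketch below works out the approximate counts they imply. The derived figures are simply our own arithmetic on the reported numbers, not additional reported data.

```python
# Illustrative arithmetic based on the figures reported above.
# The derived counts are approximations implied by the rounded percentages,
# not separately reported numbers.

total_removed = 49_247_689   # videos removed globally, Jul 1 - Dec 31, 2019
proactive_rate = 0.982       # share caught before any user report
zero_view_rate = 0.894       # share removed before receiving any views

proactively_removed = round(total_removed * proactive_rate)   # roughly 48.4 million
removed_before_views = round(total_removed * zero_view_rate)  # roughly 44.0 million

print(f"Removed before a user report: ~{proactively_removed:,}")
print(f"Removed before any views:     ~{removed_before_views:,}")
```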

The table below shows the five markets with the largest volumes of removed videos.

Country/Market      Total removals
India               16,453,360
United States       4,576,888
Pakistan            3,728,162
United Kingdom      2,022,728
Russia              1,258,853

At the end of last year, we started to roll out a new content moderation infrastructure that enables us to be more transparent in reporting the reasons videos are removed from our platform. When a video violates our Community Guidelines, it’s labeled with the policy or policies it violates and is taken down, which means the same video may be counted across multiple policy categories. For the month of December 2019, when our new content moderation infrastructure began to take effect, we’re providing a breakdown of the policy category violations for videos removed under that new infrastructure.

During the month of December, 25.5% of the videos we took down fell under the category of adult nudity and sexual activities. Out of an abundance of caution for child safety, 24.8% of the videos we removed violated our minor safety policies, which include content depicting harmful, dangerous, or illegal behavior by minors, such as alcohol or drug use, as well as more serious content that we act on immediately: we remove the content, terminate the accounts involved, and report to NCMEC and law enforcement as appropriate. Content containing illegal activities and regulated goods made up 21.5% of takedowns. In addition, 15.6% of removed videos violated our suicide, self-harm, and dangerous acts policy, which primarily reflects our removal of risky challenges. Of the remaining videos removed, 8.6% violated our violent and graphic content policy; 3% fell under our harassment and bullying policy; and less than 1% contained content that violated our policies on hate speech, integrity and authenticity, and dangerous individuals and organizations.
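
As a rough illustration of how a multi-label breakdown like the one above can be tallied, the sketch below counts each removed video once per policy label it carries, so category shares are computed against the number of removed videos and can overlap. The labels and data in the sketch are our own stand-ins, not TikTok’s internal schema.

```python
from collections import Counter

# Illustrative tally of a multi-label policy breakdown.
# Each removed video carries one or more policy labels, so category
# percentages are computed against the number of removed videos and
# the same video can contribute to several categories.

removed_videos = [
    {"id": 1, "labels": ["adult nudity and sexual activities", "minor safety"]},
    {"id": 2, "labels": ["illegal activities and regulated goods"]},
    {"id": 3, "labels": ["suicide, self-harm, and dangerous acts"]},
]

label_counts = Counter(label for video in removed_videos for label in video["labels"])
total_removed = len(removed_videos)

for label, count in label_counts.most_common():
    print(f"{label}: {count / total_removed:.1%} of removed videos")
```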

We’ve since transitioned the majority of our content review queues to our new content moderation system, and our subsequent reports will be able to include this detailed data for the full time period of each report.

Overview

There’s nothing more important to us than protecting the safety of our users. As a global platform, we have thousands of people across the markets where TikTok operates working to maintain a safe and secure app environment. We address problematic behavior and content through a combination of policies, technology, and moderation, which may include removing videos and banning accounts that violate our Community Guidelines or Terms of Service.

Empowering our community with tools

In addition to technology and moderation measures (more on that below), we have also put numerous controls directly into the hands of our users so that they can manage their own experience. These controls offer users an easy way to restrict who can engage with their content, set automatic comment filters, disable messages, or block another user. And if they come across something they think might violate our Community Guidelines, they can report it to our team directly from our app.

Account privacy

TikTok offers a wide range of privacy settings that users can activate during account setup, or at any time. For instance, with a private account, only approved followers can view or comment on a user’s videos or send a direct message. Messaging can also be easily limited or turned off altogether, and the feature is disabled automatically for registered accounts under the age of 16. Additionally, users can remove a follower or block another user from contacting them.

Additional content controls

Creators make their content and they deserve control over how others can interact with it. TikTok gives users robust account-level and video-specific options to customize their content settings, like limiting who can comment on or duet with a video they’ve created. Users also can enable comment filters by creating a custom list of keywords that will be automatically blocked from any comments on their videos. Alternatively, users can opt to disable comments on a specific video, restrict commenting to a select audience, or turn off comments on their videos altogether.
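
Conceptually, a comment keyword filter is a simple match of each incoming comment against a creator-defined blocklist. The sketch below is our own illustration of that idea, not TikTok’s implementation; the function name and keyword list are hypothetical.

```python
# Hypothetical sketch of a creator-defined keyword comment filter.
# Not TikTok's implementation; it only illustrates blocking comments
# that contain any term on a creator's custom list.

def is_blocked(comment: str, blocked_keywords: set[str]) -> bool:
    """Return True if the comment contains any blocked keyword (case-insensitive)."""
    text = comment.lower()
    return any(keyword.lower() in text for keyword in blocked_keywords)

creator_blocklist = {"spoiler", "spam-link"}                 # hypothetical custom list
print(is_blocked("No spoilers please!", creator_blocklist))  # True  ("spoiler" matches)
print(is_blocked("Love this video", creator_blocklist))      # False
```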

In addition, we actively educate users about their options through in-app safety and well-being videos, and at our Safety Center. For instance, we partnered with some of our top creators on a series of videos that encourage users to keep tabs on their screen time. Our Screen Time Management feature allows users to set a cap on how much time they’d like to spend on TikTok. A user also can choose to enable Restricted Mode which limits the appearance of content that may not be appropriate for all audiences. These features are always available in the digital well-being section of our app settings.

Setting community expectations with our Community Guidelines

TikTok is a global community of people looking for an authentic, positive experience. Our commitment to the community starts with our policies, which are laid out in our Community Guidelines. Our Community Guidelines are an important code of conduct for a safe and friendly environment. We update these guidelines from time to time to protect our users from evolving trends and content that may be unsafe. They’re meant to help foster trust, respect, and positivity for the TikTok community.

We trust all users to respect our Community Guidelines and keep TikTok fun and welcoming for everyone. Violation of these policies may result in having content or accounts removed.

Enforcing our policies

Around the world, tens of thousands of videos are uploaded on TikTok every minute. With every video comes a greater responsibility on our end to protect the safety and well-being of our users. To enforce our Community Guidelines, we use a combination of technology and content moderation to identify and remove content and accounts that violate our guidelines.

Technology

Technology is a key part of effectively enforcing our policies, and our systems are developed to automatically flag certain types of content that may violate our Community Guidelines. These systems take into account things like patterns or behavioral signals to flag potentially violative content, which allows us to take swift action and reduce potential harm. We regularly study evolving trends, academic learnings, and industry best practices to continually enhance our systems.
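
As a loose illustration only, automated flagging of this kind can be thought of as rules evaluated over simple behavioral signals. The signals, thresholds, and rules below are invented for the sketch and do not describe TikTok’s actual systems.

```python
from dataclasses import dataclass

# Hypothetical, simplified example of signal-based flagging for human review.
# The signals, thresholds, and rules are invented for illustration.

@dataclass
class UploadSignals:
    account_age_days: int
    uploads_last_hour: int
    prior_violations: int
    matched_known_hash: bool  # e.g., matches a hash of previously removed content

def should_flag_for_review(s: UploadSignals) -> bool:
    if s.matched_known_hash:
        return True   # strongest signal: re-upload of known violative content
    if s.prior_violations >= 3:
        return True   # repeat-violation pattern
    if s.account_age_days < 1 and s.uploads_last_hour > 20:
        return True   # brand-new account uploading unusually fast
    return False

print(should_flag_for_review(UploadSignals(400, 2, 0, False)))  # False
print(should_flag_for_review(UploadSignals(0, 30, 0, False)))   # True
```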

Content moderation

Technology today isn’t so advanced that we can solely rely on it to enforce our policies. For instance, context can be important when determining whether certain content, like satire, is violative. As such, our team of trained moderators helps to review and remove content that violates our standards. In some cases, this team proactively removes evolving or trending violative content, such as dangerous challenges or harmful misinformation.

Another way we moderate content is based on reports we receive from our users. We try to make it easy for users to flag potentially inappropriate content or accounts to us through our in-app reporting feature, which allows a user to choose from a list of reasons why they think something might violate our guidelines (such as violence or harm, harassment, or hate speech). If our moderators determine there’s a violation, the content is removed.
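
A minimal sketch of what an in-app report might look like as data is shown below; the reason values mirror the examples mentioned above, but the structure and names are hypothetical, not TikTok’s actual API.

```python
from enum import Enum

# Hypothetical structure for an in-app user report; not TikTok's actual API.

class ReportReason(Enum):
    VIOLENCE_OR_HARM = "violence or harm"
    HARASSMENT = "harassment"
    HATE_SPEECH = "hate speech"

def submit_report(video_id: str, reason: ReportReason) -> dict:
    """Queue a user report for review by human moderators."""
    return {"video_id": video_id, "reason": reason.value, "status": "pending_review"}

print(submit_report("1234567890", ReportReason.HARASSMENT))
```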
