
TIKTOK TRANSPARENCY REPORT

July 1, 2019 - December 31, 2019

Released: July 9, 2020


Introduction

TikTok is built upon the foundation of creative expression. We encourage our users to celebrate what makes them unique within a diverse and growing community that does the same. Feeling safe helps people feel comfortable expressing themselves openly, which allows creativity to flourish. This is why our top priority is promoting a safe and positive experience for everyone on TikTok. 

We published our first Transparency Report on December 30, 2019, and committed to regularly updating our community on how we're responsibly responding to data requests and protecting intellectual property. This time we're expanding the report to also provide insight into:

  • Our approach and policies to protect the safety of our community
  • How we establish and enforce our Community Guidelines
  • How we empower our community with tools and education
  • The volume of videos removed for violating our Community Guidelines 

In addition, we're starting to report content moderation metrics against nine of our content policy categories. At the end of last year we began rolling out a new content moderation infrastructure that enables us to be more transparent in reporting the reasons that violative videos were removed. In this report we're sharing the reasons videos were removed during the month of December, as actioned through our new content moderation infrastructure. In subsequent reports, we'll be able to share this data for the full six-month period.

We're committed to earning the trust of our community every day. We'll continue to evolve this report to address the feedback we're hearing from our users, policymakers and experts. Ultimately our goal is to keep TikTok an inspiring and joyful place for everyone.

TikTok is fully committed to the safety of our users

There's nothing more important to us than protecting the safety of our users. As a global platform, we have thousands of people across the markets where TikTok operates working to maintain a safe and secure app environment. We address problematic behavior and content through a combination of policies, technology, and moderation, which may include removing videos and banning accounts that violate our Community Guidelines or Terms of Service.

Empowering our community with tools                                      

In addition to technology and moderation measures (more on that below), we have also put numerous controls directly into the hands of our users so that they can manage their own experience. These controls offer users an easy way to restrict who can engage with their content, set automatic comment filters, disable messages, or block another user. And if they come across something they think might violate our Community Guidelines, they can report it to our team directly from our app.

Account privacy                                    

TikTok offers a wide range of privacy settings that users can activate during account setup, or at any time. For instance, with a private account, only approved followers can view or comment on a user’s videos or send a direct message. Messaging can also be easily limited or turned off altogether, and the feature is disabled automatically for registered accounts under the age of 16. Additionally, users can remove a follower or block another user from contacting them.

Additional content controls

Creators make their content, and they deserve control over how others interact with it. TikTok gives users robust account-level and video-specific options to customize their content settings, like limiting who can comment on or duet with a video they’ve created. Users can also enable comment filters by creating a custom list of keywords that will be automatically blocked from any comments on their videos. Alternatively, users can opt to disable comments on a specific video, restrict commenting to a select audience, or turn off comments on their videos altogether.

In addition, we actively educate users about their options through in-app safety and well-being videos, and at our Safety Center. For instance, we partnered with some of our top creators on a series of videos that encourage users to keep tabs on their screen time. Our Screen Time Management feature allows users to set a cap on how much time they'd like to spend on TikTok. A user also can choose to enable Restricted Mode which limits the appearance of content that may not be appropriate for all audiences. These features are always available in the digital well-being section of our app settings. 

Setting community expectations with our Community Guidelines

TikTok is a global community of people looking for an authentic, positive experience. Our commitment to the community starts with our policies, which are laid out in our Community Guidelines. Our Community Guidelines are an important code of conduct for a safe and friendly environment. We update these guidelines from time to time to protect our users from evolving trends and content that may be unsafe. They're meant to help foster trust, respect, and positivity for the TikTok community.

We trust all users to respect our Community Guidelines and keep TikTok fun and welcoming for everyone. Violation of these policies may result in having content or accounts removed.

Enforcing our policies                                 

Around the world, tens of thousands of videos are uploaded to TikTok every minute. With every video comes a greater responsibility on our end to protect the safety and well-being of our users. To enforce our Community Guidelines, we use a combination of technology and content moderation to identify and remove content and accounts that violate our guidelines.

Technology

Technology is a key part of effectively enforcing our policies, and our systems are developed to automatically flag certain types of content that may violate our Community Guidelines. These systems take into account things like patterns or behavioral signals to flag potentially violative content, which allows us to take swift action and reduce potential harm. We regularly study evolving trends, academic learnings, and industry best practices to continually enhance our systems.

Content moderation                                       

Technology today isn't so advanced that we can solely rely on it to enforce our policies. For instance, context can be important when determining whether certain content, like satire, is violative. As such, our team of trained moderators helps to review and remove content that violates our standards. In some cases, this team proactively removes evolving or trending violative content, such as dangerous challenges or harmful misinformation.

Another way we moderate content is based on reports we receive from our users. We try to make it easy for users to flag potentially inappropriate content or accounts to us through our in-app reporting feature, which allows a user to choose from a list of reasons why they think something might violate our guidelines (such as violence or harm, harassment, or hate speech). If our moderators determine there's a violation, the content is removed.

Content actioned in H2 2019

In the second half of last year (July 1 – December 31, 2019), we removed 49,247,689 videos globally for violating our Community Guidelines or Terms of Service. That's less than 1% of all the videos our users created. Our systems proactively caught and removed 98.2% of those videos before a user reported them, and of the total videos removed, 89.4% were taken down before they received any views.

The table below shows the five markets with the largest volumes of removed videos.

Country/Market      Total removals
India               16,453,360
United States        4,576,888
Pakistan             3,728,162
United Kingdom       2,022,728
Russia               1,258,853

At the end of last year we started to roll out a new content moderation infrastructure that enables us to be more transparent in reporting the reasons that videos are removed from our platform. When a video violates our Community Guidelines, it's labeled with the policy or policies it violates and is taken down. This means the same video may appear across multiple policy categories. For the month of December 2019, when our new content moderation infrastructure first took effect, we're providing a breakdown of the policy category violations for videos removed under that new infrastructure.

During the month of December, 25.5% of the videos we took down fell under the category of adult nudity and sexual activities. Out of an abundance of caution for child safety, 24.8% of the videos we removed violated our minor safety policies, which include content depicting harmful, dangerous, or illegal behavior by minors, like alcohol or drug use, as well as more serious content that we immediately remove, terminating the accounts involved and reporting to NCMEC and law enforcement as appropriate. Content containing illegal activities and regulated goods made up 21.5% of takedowns. In addition, 15.6% of videos removed violated our suicide, self-harm, and dangerous acts policy, which primarily reflects our removal of risky challenges. Of the remaining videos removed, 8.6% violated our violent and graphic content policy; 3% fell under our harassment and bullying policy; and less than 1% contained content that violated our policies on hate speech, integrity and authenticity, and dangerous individuals and organizations.

We've since transitioned the majority of our content review queues to our new content moderation system, and our subsequent reports will be able to include this detailed data for the full time period of each report.

Terminology and definitions                                                                   

When determining what content is appropriate for the TikTok community, we use these terms and definitions to guide our moderation strategy. We work with a range of trusted experts to help us understand the dynamic policy landscape and develop policies and moderation strategies to address problematic content and behaviors as they emerge. These include the eight individual experts on our U.S. Content Advisory Council, and organizations such as ConnectSafely.org, the National Center for Missing and Exploited Children, WePROTECT Global Alliance, and more.

  • Dangerous individuals and organizations: We do not allow dangerous individuals or organizations to use our platform to promote terrorism, crime, or other types of behavior that could cause harm. When there is a credible threat to public safety, we handle the issue by banning the account and working with relevant legal authorities as necessary and when appropriate.
  • Illegal activities and regulated goods: We prohibit the trade, sale, promotion, and use of certain regulated goods, as well as the depiction or promotion of criminal activities. Some content may be removed if it relates to activities or goods that are illegal or regulated in the majority of the region or world, even if the activities or goods in question are legal in the jurisdiction of posting. We allow exceptions for content that provides value to the public, such as educational, scientific, artistic, and newsworthy content.
  • Violent and graphic content: We do not allow content that is excessively gruesome or shocking, especially content that promotes or glorifies abject violence or suffering. We do allow exceptions in certain circumstances, for example, content that is newsworthy or meant to raise awareness about issues. When we identify a genuine risk of violence or threat to public safety, we ban the account and work with relevant legal authorities as necessary and when appropriate.
  • Suicide, self-harm, and dangerous acts: We do not promote participation in activities that could lead to harm. We also do not permit users to encourage others to take part in dangerous activities. We do not allow content that promotes self-harm or suicide, but we do allow our users to share their experiences in order to raise awareness about these issues. 

    We work with industry experts around the world to strike the right balance in our moderation. If we come across material that indicates there may be an imminent danger of self-harm, TikTok may contact local emergency services to carry out a wellness check.
  • Hate speech: We do not tolerate content that attacks or incites violence against an individual or a group of individuals on the basis of protected attributes. We do not allow content that includes hate speech, and we remove it from our platform.
  • Harassment and bullying: Users should feel safe to express themselves without fear of being shamed, humiliated, bullied, or harassed. We deeply understand the psychological distress that abusive content can have on individuals, and we do not tolerate abusive content or behavior on our platform.
  • Adult nudity and sexual activities: We do not allow sexually explicit or gratifying content on TikTok, including animated content of this nature. Sexualized content carries many risks, such as triggering legal penalties in some jurisdictions and causing harm to our users through sharing non-consensual imagery (for example, revenge porn). Also, overtly sexual content can be offensive within certain cultures. We do allow exceptions around nudity and sexually explicit content for educational, documentary, scientific, or artistic purposes.
  • Minor safety: We are deeply committed to child safety and have zero tolerance for predatory or grooming behavior toward minors. We do not allow content that depicts or disseminates child abuse, child nudity, or sexual exploitation of children in either digital or real-world format, and we report such content to relevant legal authorities. We also do not allow content depicting minors engaged in delinquent behavior.
  • Integrity and authenticity: Content that is intended to deceive or mislead any of our community members endangers our trust-based community. We do not allow such content on our platform. This includes activities such as spamming, impersonation, and disinformation campaigns.

Compliance with government requests

The sections below provide insight into the volume of legal requests we received from governments around the world during the second half of 2019 and how we responded. We honor requests made to us through the proper channels and where otherwise required by law. In limited emergency situations, TikTok will disclose user information without legal process, when we have reason to believe, in good faith, that the disclosure of information is required to prevent the imminent risk of death or serious physical injury to any person.

Legal requests for user information

Any information request we receive is carefully reviewed for legal sufficiency to determine, for example, whether the requesting entity is authorized to gather evidence in connection with a law enforcement investigation or to investigate an emergency involving imminent harm. For more on our policies and practices, please see our Law Enforcement Data Request Guidelines.                         

The following table shows the number of information requests we received by country in the second half of 2019 (July 1 – December 31, 2019) and the rate at which we complied with those requests.

NOTE: TikTok did not receive any legal requests for account information from countries/markets other than those on the list above.

Government requests for content removal                                     

From time to time we receive requests from government agencies to remove content on our platform, such as requests based on local laws prohibiting obscenity, hate speech, adult content, and more. We review all material in line with our Community Guidelines, Terms of Service, and applicable law, and take the appropriate action. If we believe that a request isn't legally valid or that the content doesn't violate our standards, we may not action the content.

The following table shows the requests we received from legal entities in the second half of 2019 (July 1 – December 31, 2019) to remove or restrict content.

NOTE: TikTok did not receive any government requests to remove or restrict content from countries/markets other than those on the list above.

Takedowns for infringement of intellectual property

The creativity of our users is the fuel of TikTok. Our platform enables their self-expression to shine, and we do our best to protect it. 

Our Community Guidelines and Terms of Service prohibit content that infringes on third-party intellectual property. We honor valid takedown requests made under applicable copyright law, such as the Digital Millennium Copyright Act (DMCA). Upon receiving an effective notice from a rights holder of potential intellectual property infringement, TikTok will remove the infringing content in a timely manner.

Any activity that infringes on the copyrights of others may lead to account suspension or removal. For more information on how we evaluate copyright infringement allegations, please see our Intellectual Property Policy.

NOTE: Only copyright infringement takedown notices from copyright owners, their agents, or attorneys are included in our copyrighted content takedown statistics.

What's Next

We’re proud of the strides we’re making to ensure that TikTok is an authentic, inclusive and safe community for anyone who wants to express their creative side. But we know we can do more. We welcome your feedback on this report; please contact us via transparencyreport [at] tiktok [dot] com. We look forward to sharing more information in our next report. 

