TikTok Transparency Report
January 1, 2020 – June 30, 2020
Released: September 22, 2020
Hundreds of millions of people around the world come to TikTok for entertainment, self-expression, and connection. We have no higher priority than helping promote a safe app experience that fosters joy and belonging among our growing global community. We can't do that without transparency.
TikTok is striving to be the most transparent and accountable company in the industry when it comes to how we are keeping our users safe. We have put actions behind our words by making our content moderation, algorithms, and privacy and security practices available at our global Transparency and Accountability Centers. We publish these Transparency Reports to provide insight into the volume and nature of content removed for violating our Community Guidelines or Terms of Service, and how we respond to law enforcement requests for information, government requests for content removals, and copyrighted content take-down notices. We also recap the improvements we have made over the first half of 2020 to enhance safety and promote positivity on our platform.
We are proud of the progress we have made to increase visibility in these areas, and we plan to keep providing more information in each report. We welcome your feedback or questions about this report. Please email us at transparency [at] tiktok [dot] com.
Promoting safety and community on TikTok
We made important strides during the first half of 2020 to establish a number of policies, products, and partnerships designed to promote safety, accountability, and positivity in the app.
We started 2020 by releasing an expanded set of policies in our Community Guidelines to promote an uplifting and welcoming app environment. This included new policies on misleading content to prevent the spread of misinformation and disinformation as well as more nuanced policies in areas like hate speech to foster inclusion. We also made public commitments to promote and support the Voluntary Principles to Counter Online Child Sexual Exploitation and Abuse and the EU Code of Practice on Disinformation.
Promoting our policies
To support our policies, we introduced a way for users to easily report different kinds of misinformation directly to our team for review. We launched a fact-checking program across eight markets to help us verify misleading content, such as misinformation about the novel coronavirus, elections, and climate change. We also introduced in-app educational PSAs on hashtags related to important topics in the public discourse, such as the elections, Black Lives Matter, and harmful conspiracies, including QAnon.
Connecting users to authoritative and educational content
We continued our work to educate users about our policies and safety features through in-app safety videos. To promote digital literacy, we launched a Youth Portal where teens and their families can learn about internet safety and the tools and controls built into TikTok. We also created TikTok-style videos with popular creators to educate our community about media literacy and misinformation.
We prioritized building experiences that connect our users to authoritative and supportive content across a range of topics. When the novel coronavirus emerged, we promoted authoritative content through in-app informational pages and hosted educational livestreams and hashtag challenges with the World Health Organization, the International Federation of Red Cross and Red Crescent Societies, UNICEF India, and popular voices for public health and science, including Bill Nye the Science Guy and The Prince's Trust. To support our Black community, we developed dedicated pages in our app where users could learn about Black history and social justice organizations while celebrating Black creators and artists during Black Music Month.
Enhancing youth safety
TikTok launched a suite of industry-leading youth safety features that allow parents and teens to customize their TikTok experience based on individual needs. Through our Family Pairing feature, parents can link their TikTok account to their teen's to guide the type of content available to their teen, promote healthy screen-time habits, and manage direct message settings. TikTok automatically disables direct messaging for registered accounts under the age of 16, and we don't allow any user to send private images as a preventative measure against harmful and abusive content.
We value feedback and input from experts, non-profits, and others, because it makes our policies, products, and safeguards stronger and more comprehensive for our community. Over the first half of 2020, TikTok formed a number of important global partnerships with leading safety organizations. This includes a significant partnership with the National Center for Missing and Exploited Children and WePROTECT Global Alliance to do our part to combat child sexual exploitation and abuse material. In India, we partnered with the Data Security Council of India to increase awareness of safe online practices. And in the US, we launched our Content Advisory Council of experts on hate speech, inclusive AI, youth safety, and more, as well as our Creator Diversity Collective, which brings people from different backgrounds together to help ensure diversity, inclusion, and representation in our programs and on our platform.
Community Guidelines enforcement
In the first half of 2020 (January 1 - June 30), 104,543,719 videos were removed globally for violating our Community Guidelines or Terms of Service, which is less than 1% of all videos uploaded on TikTok. Of those videos, 96.4% were found and removed before a user reported them, and 90.3% were removed before they received any views.
This chart shows the volume of videos removed by policy violation. When a video violates our Community Guidelines, it's labeled with the policy or policies it violates and is removed. This means the same video may be counted in multiple policy categories.
As a result of the coronavirus pandemic, we relied more heavily on technology to detect and automatically remove violating content in markets such as India, Brazil, and Pakistan. Of the total videos removed, 10,698,297 were flagged and removed automatically for violating our Community Guidelines. Those videos are not reflected in the chart above.
This chart shows the five countries/markets with the largest volumes of removed videos.
The sections below provide insight into the volume and types of legal requests we received in the first half of 2020 and how we responded. We receive legal requests from governments and law enforcement agencies around the world, as well as from intellectual property rights holders. We honor requests made to us through the proper channels and where otherwise required by law.
Law enforcement requests for user information
TikTok is committed to complying with valid law enforcement requests while respecting the privacy and rights of our users. To obtain non-public user information, law enforcement must provide the appropriate legal documents required for the type of information being sought, such as a subpoena, court order, or warrant, or submit an emergency request. Any information request we receive is carefully reviewed for legal sufficiency to determine, for example, whether the requesting entity is authorized to gather evidence in connection with a law enforcement investigation or to investigate an emergency involving imminent harm.
In limited emergency situations, TikTok will disclose user information without legal process. This happens when we have reason to believe, in good faith, that the disclosure of information is required to prevent the imminent risk of death or serious physical injury to any person. For more on our policies and practices, please see our Law Enforcement Data Request Guidelines.
This chart shows the volume and nature of requests for user information we received during the first half of 2020 (January 1 – June 30, 2020) and the rate with which we complied.
Government requests for content restrictions
From time to time we receive requests from government agencies to restrict or remove content on our platform in accordance with local laws. We review all material in line with our Community Guidelines, Terms of Service, and applicable law, and take the appropriate action. If we believe that a request isn't legally valid or that the reported content doesn't violate our standards, we may restrict the availability of that content only in the country where it is alleged to be illegal, or we may take no action.
This chart shows the requests we received from government agencies in the first half of 2020 (January 1 – June 30, 2020) to remove or restrict content and the rate with which we complied.
Infringement of intellectual property removals
The creativity of our users is the fuel of TikTok. Our platform enables their self-expression to shine, and we do our best to protect it. Our Community Guidelines and Terms of Service prohibit content that infringes on third-party intellectual property. We honor valid take-down requests based on violations of copyright law, such as the Digital Millennium Copyright Act (DMCA). Upon receiving an effective notice from a rights holder of potential intellectual property infringement, TikTok will remove the infringing content in a timely manner. Any activity that infringes on the copyrights of others may lead to account suspension or removal. For more information on how we evaluate copyright infringement allegations, please see our Intellectual Property Policy.
This chart shows the copyrighted content take-down notices we received and the rate at which we removed content.
Terminology and definitions
When determining what content is appropriate for the TikTok community, we use these terms and definitions to guide our moderation strategy. We work with a range of trusted experts to help us understand the dynamic policy landscape and develop policies and moderation strategies to address problematic content and behaviors as they emerge. These include the nine individual experts on our U.S. Content Advisory Council, and organizations such as the National PTA, the National Center for Missing and Exploited Children, WePROTECT Global Alliance, and more.
- Dangerous individuals and organizations: We do not allow dangerous individuals or organizations to use our platform to promote terrorism, crime, or other types of behavior that could cause harm. When there is a credible threat to public safety, we handle the issue by banning the account and working with relevant legal authorities as necessary and when appropriate.
- Illegal activities and regulated goods: We prohibit the trade, sale, promotion, and use of certain regulated goods, as well as the depiction or promotion of criminal activities. Some content may be removed if it relates to activities or goods that are illegal or regulated in the majority of the region or world, even if the activities or goods in question are legal in the jurisdiction of posting. We allow exceptions for content that provides value to the public, such as educational, scientific, artistic, and newsworthy content.
- Violent and graphic content: We do not allow content that is excessively gruesome or shocking, especially content that promotes or glorifies abject violence or suffering. We do allow exceptions for certain circumstances, for example, content that is newsworthy or meant to raise awareness about issues. When we identify a genuine risk of violence or threat to public safety, we ban the account and work with relevant legal authorities as necessary and when appropriate.
- Suicide, self-harm, and dangerous acts: We do not promote participation in activities that could lead to harm. We also do not permit users to encourage others to take part in dangerous activities. We do not allow content that promotes self-harm or suicide, but we do allow our users to share their experiences in order to raise awareness about these issues. We work with industry experts around the world to strike the right balance in our moderation. If we come across material that indicates there may be an imminent danger of self-harm, TikTok may contact local emergency services to carry out a wellness check.
- Hate speech: We do not tolerate content that attacks or incites violence against an individual or a group of individuals on the basis of protected attributes. We do not allow content that includes hate speech, and we remove it from our platform.
- Harassment and bullying: Users should feel safe to express themselves without fear of being shamed, humiliated, bullied, or harassed. We deeply understand the psychological distress that abusive content can cause individuals, and we do not tolerate abusive content or behavior on our platform.
- Adult nudity and sexual activities: We do not allow sexually explicit or gratifying content on TikTok, including animated content of this nature. Sexualized content carries many risks, such as triggering legal penalties in some jurisdictions and causing harm to our users through sharing non-consensual imagery (for example, revenge porn). Also, overtly sexual content can be offensive within certain cultures. We do allow exceptions around nudity and sexually explicit content for educational, documentary, scientific, or artistic purposes.
- Minor safety: We are deeply committed to child safety and have zero tolerance for predatory or grooming behavior toward minors. We do not allow content that depicts or disseminates child abuse, child nudity, or sexual exploitation of children in either digital or real-world formats, and we report such content to relevant legal authorities. We also do not allow content depicting minors engaged in delinquent behavior.
- Integrity and authenticity: Content that is intended to deceive or mislead any of our community members endangers our trust-based community. We do not allow such content on our platform. This includes activities such as spamming, impersonation, and disinformation campaigns.