TIKTOK TRANSPARENCY REPORT

July 1, 2020 – December 31, 2020 

Released: February 24, 2021


Introduction

TikTok is a diverse, global community fueled by creative expression. We work to maintain an environment where everyone feels safe and welcome to create videos, find community, and be entertained. We believe that feeling safe is essential to feeling comfortable expressing yourself authentically, which is why we strive to uphold our Community Guidelines by removing accounts and content that violate them. Our goal is for TikTok to remain a place for inspiration, creativity, and joy.

We are committed to being transparent about how our policies are enforced, because it helps build trust with our community and holds us accountable. We publish Transparency Reports to provide visibility into the volume and nature of content removed for violating our Community Guidelines or Terms of Service. We also report how we respond to law enforcement requests for information, government requests for content removals, and intellectual property removal requests. This report covers the second half of 2020 (July 1 - December 31) and includes additional information on our work to counter COVID-19 misinformation, maintain the integrity of our platform throughout global elections, and promote community well-being.

We welcome feedback or questions about this report. Please email us at transparency@tiktok.com.


Community safety and well-being

We continually invest in our policies, products, and partnerships to support the overall health of our community. Here are some of the key updates we made in the second half of 2020.

Supporting community well-being

We strengthened our Community Guidelines to promote community well-being, based on behavior we saw on the platform and feedback we heard from our community and experts. For example, our updated policies on self-harm, suicide, and eating disorders reflect feedback and language provided by mental health experts. Our ad policies now ban ads for fasting apps and weight loss supplements. And we improved the way we notify users to help them understand why their video was removed.

TikTok partners with a range of organizations to support people who may be struggling with an eating disorder, self-harm behavior, or thoughts of suicide. Now, relevant searches and hashtags are redirected to emergency support where users can access free and confidential help. We also provide resources with evidence-based actions someone can take to improve their emotional well-being.

Supporting TikTok families

We regularly speak to parents, teens, and youth safety experts to develop meaningful ways for families to create the TikTok experience that's right for them. In late 2020, we expanded our Family Pairing features to enable parents to set more guardrails on their teens' content and privacy settings. With tools to restrict search, comments, and screen time, we hope to encourage families to have broader conversations about digital safety.


Protecting authenticity and platform integrity

At TikTok, we work diligently to protect the integrity of our platform and take multiple approaches to help authentic content thrive. This includes prohibiting activities or content that may undermine platform integrity, such as misinformation related to civic processes or public health. We define misinformation as content that is inaccurate or false. In the second half of 2020, we continued to support our community with authoritative information about elections, COVID-19, and vaccines, while also removing misinformation, such as election and anti-vaccine misinformation. This section provides insight into the results of our efforts from July 1 to December 31, 2020.

COVID-19

TikTok continues to work with public health experts to help our community stay safe and informed on COVID-19 and vaccines. We make authoritative public health information available directly in our app – from our Discover page, on relevant search results, hashtags, and videos, and at our Safety Center. In our COVID-19 information hub, our community can find answers to common questions about the coronavirus and vaccines from the World Health Organization (WHO) and the US Centers for Disease Control and Prevention (CDC), as well as tips on staying safe.

In the second half of 2020, our COVID-19 information hub was viewed 2,625,049,193 times globally. Banners directing users to the hub were added to 3,065,213 videos and were viewed 63,685,788,567 times. We also added public service announcements (PSAs) to relevant COVID-19 and vaccine hashtags that direct users to the WHO and local public health resources, and these PSAs were viewed 38,010,670,666 times. Alongside providing access to authoritative information, we removed 51,505 videos in the second half of 2020 for promoting COVID-19 misinformation. Of those videos, 86% were removed before they were reported to us, 87% were removed within 24 hours of being uploaded to TikTok, and 71% had zero views.
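
For readers who want rough counts behind those rates, here is a minimal sketch that converts the stated percentages into approximate video counts. The report gives only the rates, so how the three groups overlap is unknown; the arithmetic is ours, not the report's.

```python
# Back-of-the-envelope counts implied by the stated COVID-19 removal rates.
# Assumption: each percentage applies to the full 51,505-video total;
# the report does not say how the three groups overlap.
TOTAL_REMOVED = 51_505

rates = {
    "removed before a user report": 0.86,
    "removed within 24 hours of upload": 0.87,
    "removed with zero views": 0.71,
}

for label, rate in rates.items():
    print(f"~{TOTAL_REMOVED * rate:,.0f} videos {label}")
```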

2020 elections

Though politics and news make up a relatively small share of overall content on TikTok, and we don't accept paid political ads, we work to keep TikTok free of election misinformation and to provide our community with access to authoritative information. In the second half of 2020, our teams worked to safeguard the integrity of elections globally. To further these efforts, we expanded our global fact-checking partnerships, worked closely with electoral commissions in multiple regions, developed product features that provide users with authoritative electoral information, and improved our internal rapid-response capabilities and processes.

In the US, we created a 2020 US elections guide with information about voting, the elections, and results from the National Association of Secretaries of State, the US Election Assistance Commission, The Associated Press, and other trusted organizations. The guide was accessible from our Discover page, on election-related search results, hashtags, and videos, and on videos from verified political accounts. In total, our elections guide was visited 17,995,580 times.

PSAs were added to election-related hashtags to remind people to follow our Community Guidelines, verify information, and report content; these hashtag PSAs were viewed 73,776,375,496 times. Because the majority of content people see on TikTok comes through their For You feed (which recommends videos regardless of when they were posted), we added banners to 6,978,395 election-related videos directing viewers to the elections guide, where they could access up-to-date information and results no matter when they watched the video. These banners were viewed 37,708,973,828 times.

Our team of safety, security, policy, and operations experts works to detect and stop the spread of election misinformation and other content that violates our Community Guidelines. We prepared for 65 different scenarios, such as premature declarations of victory or disputed results, which helped us respond to emerging content appropriately and in a timely manner. Our teams are supported by automated technology that identifies and flags content for review, as well as by industry-leading threat intelligence platforms that escalate content emerging across the internet and on our platform.

In the second half of 2020, 347,225 videos were removed in the US for election misinformation, disinformation, or manipulated media. We worked with fact-checkers at PolitiFact, Lead Stories, and SciVerify to assess the accuracy of content and limit the distribution of unsubstantiated content. As a result, 441,028 videos were not eligible for recommendation into anyone's For You feed. We further removed 1,750,000 accounts that were used for automation during the timeframe of the US elections. While we don't know whether any of these accounts were used specifically to amplify election-related content, removing them was important to protect the platform at this critical time.

Learnings

What we think worked

  1. Our proportionate focus on both foreign and domestic threats to our platform and overall elections integrity during the US 2020 elections was the right approach. We started our elections preparations in 2019 and built defenses based on industry learnings from the US 2016 elections, but we also prepared for more domestic activity based on trends we've observed in how misleading content is created and spread online. Indeed, during the US 2020 elections, we found that a significant portion of misinformation was driven by domestic users: real people.
  2. We made the right tooling investments, which allowed us to quickly and meaningfully reduce the discoverability of disinformation and terms of incitement. We moved quickly to redirect misleading hashtags, such as #sharpiegate, #stopthesteal, and #patriotparty, to our Community Guidelines instead of showing results. This approach has also helped us combat QAnon content, though we must continually update our safeguards as content and terminology evolve.
  3. Prioritizing faster turnaround times for fact-checking helped us make informed and quick decisions on emerging content.
  4. Our investment in building relationships with a range of experts improved our overall approach to platform integrity, from policies to enforcement strategies to product experiences in our app. By leveraging an API from the National Association of Secretaries of State (NASS) and their nonpartisan canivote.org, we were able to connect users directly with their state's official election websites (a hypothetical sketch of such a lookup follows this list). This enabled us to more confidently help users who wished to register to vote and reduced the confusion sometimes caused by third-party sites. Working with the Election Integrity Partnership enabled us to access, share, and take action on emerging content in near real-time. Collaborating with researchers and non-profits increased transparency around our approach to platform integrity, which is why we launched a dedicated elections page on our Safety Center where we provided extensive detail on how our policies might apply in different scenarios and a daily log of material updates.
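
To illustrate the kind of lookup described in the fourth point, the sketch below routes a user to their state's official election website. It is purely illustrative: the endpoint URL and the response field are assumptions, since the report does not document NASS's actual API.

```python
# Hypothetical sketch of a canivote.org-style state lookup; the endpoint
# and "official_site_url" field are placeholders, not NASS's documented API.
import requests  # third-party: pip install requests

NASS_LOOKUP_ENDPOINT = "https://api.example.org/canivote/state-sites"  # hypothetical

def official_election_site(state_code: str) -> str:
    """Return the official election website for a two-letter US state code."""
    resp = requests.get(NASS_LOOKUP_ENDPOINT, params={"state": state_code}, timeout=5)
    resp.raise_for_status()
    return resp.json()["official_site_url"]  # hypothetical field name

# Usage: official_election_site("CA") would return the state's official
# elections page, avoiding third-party sites of uncertain accuracy.
```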

What we can improve

  1. We will keep improving our systems to proactively detect and flag misleading content for review. For instance, we can immediately detect known disinformation using our disinformation hash bank, and we're working to advance our models so that we can better identify altered versions of known disinformation (a rough sketch of this hash-matching idea follows this list).
  2. We will continue to develop our system that prevents repeat offenders from circumventing our enforcement decisions.
  3. More investment is needed to educate creators and brands on disclosure requirements for paid influencer content. TikTok does not allow paid political ads, including content that influencers are paid to create, and we expect our community to abide by our policies and FTC guidelines.
  4. We were proud of the in-app elections guide we developed with experts, and in future elections we will launch it earlier in the process.
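
As a rough illustration of the hash-matching idea in the first point above: exact file hashes catch byte-identical re-uploads, while a perceptual hash compared with a small Hamming-distance tolerance can catch lightly altered copies. This is a minimal sketch of the general technique, not TikTok's implementation; the banks and threshold are placeholders.

```python
# Illustrative hash-bank matching, not TikTok's system.
import hashlib
from PIL import Image  # third-party: pip install Pillow

# Placeholder banks; a production system would load many known entries.
KNOWN_EXACT_HASHES: set[str] = set()   # SHA-256 hex digests of known files
KNOWN_FRAME_DHASHES: set[int] = set()  # perceptual hashes of known keyframes

def sha256_of(path: str) -> str:
    """Exact-match hash: catches byte-identical re-uploads only."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def dhash(image: Image.Image, size: int = 8) -> int:
    """Difference hash: one bit per adjacent-pixel brightness comparison;
    survives re-encoding and small edits that break exact hashes."""
    gray = image.convert("L").resize((size + 1, size))
    bits = 0
    for y in range(size):
        for x in range(size):
            bits = (bits << 1) | (gray.getpixel((x, y)) > gray.getpixel((x + 1, y)))
    return bits

def frame_matches_bank(frame: Image.Image, max_hamming: int = 6) -> bool:
    """Flag a keyframe whose perceptual hash is within a small Hamming
    distance of any known-disinformation hash."""
    h = dhash(frame)
    return any(bin(h ^ known).count("1") <= max_hamming
               for known in KNOWN_FRAME_DHASHES)
```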

Community Guidelines enforcement

Videos

In the second half of 2020 (July 1 - December 31), 89,132,938 videos were removed globally for violating our Community Guidelines or Terms of Service, which is less than 1% of all videos uploaded on TikTok. Of those videos, we identified and removed 92.4% before a user reported them, 83.3% before they received any views, and 93.5% within 24 hours of being posted.

This chart shows the five markets with the largest volumes of removed videos.

Country / Market | Total removals
United States | 11,775,777
Pakistan | 8,215,633
Brazil | 7,506,599
Russia | 4,574,690
Indonesia | 3,860,156

Due to the pandemic, we continue to rely on technology to detect and automatically remove violating content in some markets, such as Brazil and Pakistan. Of the total videos removed, 8,295,164 were flagged and removed automatically for violating our Community Guidelines. Those videos are not reflected in the charts below.

TikTok offers creators the ability to appeal their video's removal. When we receive an appeal, we review the video a second time and will reinstate it if it doesn't violate our policies. Last half, we reinstated 2,927,391 videos after they were appealed. We aim to be consistent and equitable in our moderation and will continue our work to reduce false positives and provide ongoing education and training to our moderation team.

This chart shows the volume of videos removed by policy violation. A video may violate multiple policies and each violation is reflected in this chart.

Of the videos removed by our moderation team, the following chart shows the rate at which videos were proactively removed, broken down by policy reason. Proactive removal means detecting and removing a video before it's reported. Removal within 24 hours means removing the video within 24 hours of it being posted on our platform. These rates are understated, as they do not include videos removed automatically by technology.

Policy category | Proactive removal rate | Removal rate within 24 hours
Adult nudity and sexual activities | 88.3% | 90.6%
Harassment and bullying | 66.5% | 84.1%
Hateful behavior | 72.9% | 83.5%
Illegal activities and regulated goods | 96.3% | 94.8%
Integrity and authenticity | 70.5% | 91.3%
Minor safety | 97.1% | 95.8%
Suicide, self-harm, and dangerous acts | 94.4% | 91.9%
Violent and graphic content | 93.2% | 92.7%
Violent extremism | 86.9% | 89.4%
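
To make those two definitions concrete, here is a minimal sketch, with made-up counts, of how such rates fall out of moderation tallies (the example numbers happen to reproduce the minor safety row above):

```python
# How the two rates above are derived, per this report's definitions.
# "Proactive" means removed before any user report; both rates exclude
# videos removed automatically by technology. Counts are illustrative.
def removal_rates(total_removed: int, removed_before_report: int,
                  removed_within_24h: int) -> tuple[float, float]:
    proactive_rate = removed_before_report / total_removed
    within_24h_rate = removed_within_24h / total_removed
    return proactive_rate, within_24h_rate

p, d = removal_rates(total_removed=10_000,
                     removed_before_report=9_710,
                     removed_within_24h=9_580)
print(f"Proactive removal rate: {p:.1%}")  # 97.1%
print(f"Removal within 24h:     {d:.1%}")  # 95.8%
```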

Adult nudity and sexual activities

We strive to create a platform that feels welcoming and safe, and we remove nudity and sexually explicit content. Of the videos removed, 20.5% violated this policy, down from 30.9% in the first half of 2020. One reason for this decrease is improvements to our triage systems, which separate adult nudity from minor nudity. We removed 88.3% of these videos before they were reported to us, and 90.6% were removed within 24 hours of being posted.

Harassment and bullying

We believe in an inclusive community and individualized expression without fear of abuse, and we do not tolerate members of our community being shamed, bullied, or harassed. Of the videos we removed, 6.6% violated this policy, up from 2.5% in the first half of 2020. This increase reflects adjustments that made our policies around sexual harassment, threats of hacking, and bullying statements about individuals more comprehensive. Additionally, we saw modest improvements in our ability to detect harassment or bullying proactively, which remains a challenge because of linguistic and cultural nuances. Of these videos, 66.5% were removed before they were reported to us, and 84.1% were removed within 24 hours of being posted. We are committed to closing this gap and will keep our community updated as we make progress in this area.

Hateful behavior

TikTok is a diverse and inclusive community that has no tolerance for hateful behavior. Last year we renamed this policy from "hate speech" to "hateful behavior" to take a more comprehensive approach to combating hateful ideologies and off-platform activities. As a result, 2% of the videos we removed violated this policy, up from 0.8% in the first half of 2020. We have systems to detect hateful symbols, like flags and icons, but hate speech remains a challenge to proactively detect, and we continue to invest in improvements. We removed 72.9% of hateful behavior videos before they were reported to us, and 83.5% were removed within 24 hours of being posted.

Illegal activities and regulated goods

We work to ensure TikTok does not enable activities that violate laws or regulations, such as fraud or scam content, and 17.9% of the videos we removed violated this policy. This is a slight decrease from 19.6% in the first half of 2020, which we attribute to improvements in our automated detection systems as well as strengthened moderation workstreams. Of these videos, 96.3% were removed before they were reported, and 94.8% were removed within 24 hours of being posted.

Integrity and authenticity

We believe that trust forms the foundation of our community, and we do not allow content or accounts that involve fake engagement, impersonation, or misinformation. Of the videos removed, 2.4% violated this policy, up from 1.2% in the first half of 2020. We added fact-checking partners in additional markets and now have support in 16 languages, which helps us more accurately assess content and remove misinformation. We've also improved our ability to detect and remove fake engagement and spam. Of the videos removed, 70.5% were removed before they were reported to us, and 91.3% were removed within 24 hours of being posted. We are investing in our infrastructure to improve proactive detection, especially when it comes to identifying misinformation.

Minor safety

We are deeply committed to the safety of minors and regularly strengthen this policy and the processes that help keep minors safe. For instance, we've expanded our policy on harmful activities by minors to also remove content that depicts minors in possession of alcohol and tobacco products (ingestion and possession are treated equally, and both will be removed), as well as other behavior that could put the well-being of minors at risk. In the second half of 2020, 36% of content removed violated our minor safety policy, up from 22.3% in the first half of 2020. Of those videos, 97.1% were removed before they were reported to us, and 95.8% were removed within 24 hours of being posted.

To protect against child sexual abuse material (CSAM), TikTok uses PhotoDNA, a technology that helps identify and remove known child exploitation content, and we've continued to invest in our own CSAM-detection systems. These efforts have further improved our ability to remove violating content and report accounts to the National Center for Missing & Exploited Children (NCMEC) and relevant legal authorities. As a result, we made 22,692 reports to NCMEC in 2020, compared to 596 in 2019.

Suicide, self-harm, and dangerous acts

We care about the health and well-being of the individuals that make up our community. In the second half of 2020, we updated our policies on self-harm, suicide, and eating disorders to reflect feedback and language used by mental health experts. We also partnered with a number of organizations to support people who may be struggling by directing relevant searches and hashtags to emergency support. Of the videos removed, 6.2% violated these policies, which is a decrease from 13.4% in the first half of 2020, in part because we now remove content that shows risky behavior by minors under our minor safety policy. Of these videos, 94.4% were removed before they were reported, and 91.9% were removed within 24 hours of being posted.

Violent extremism

We take a firm stance against enabling violence on or off TikTok. As we refreshed our Community Guidelines last fall, we clarified our previous "dangerous individuals and organizations" policy to more holistically address the challenge of violent extremism and specify what TikTok considers a threat or incitement to violence. Of all videos removed, 0.3% violated this policy, which is on par with content removed during the first half of 2020. Of these videos, 86.9% were removed before they were reported, and 89.4% were removed within 24 hours of being posted.

Violent and graphic content

TikTok is a platform that celebrates creativity, not shock value or violence. Of the total videos removed, 8.1% violated this policy, compared to 8.7% in the first half of 2020. Of these videos, 93.2% were removed before they were reported, and 92.7% were removed within 24 hours of being posted. For documentary purposes, we allow some such content to remain on our platform, such as videos of violent protests or animals hunting in nature. To give people more control over the videos they watch, we introduced opt-in viewing screens for this content.

Accounts

In the second half of 2020, 6,144,040 accounts were removed for violating our Community Guidelines. An additional 9,499,881 spam accounts were removed, along with 5,225,800 spam videos posted by those accounts. We also prevented 173,246,894 accounts from being created through automated means.

Ads

TikTok has strict policies to protect users from fake, fraudulent, or misleading content, including ads. Advertiser accounts and ad content are held to these policies and must follow our Community Guidelines, Advertising Guidelines, and Terms of Service. In the second half of 2020, we rejected 3,501,477 ads for violating advertising policies and guidelines. We are committed to creating a safe and positive environment for our users, and we regularly review and further strengthen our systems to combat ads that violate our policies.


Legal requests

The sections below provide insight into the volume and types of legal requests we received in the second half of 2020 and how we responded. We receive legal requests from governments and law enforcement agencies around the world and from IP rights holders. We honor requests made to us through the proper channels and where otherwise required by law.

Law enforcement requests for user information

TikTok is committed to complying with valid law enforcement requests while respecting the privacy and rights of our users. To obtain non-public user information, law enforcement must provide the appropriate legal documents required for the type of information being sought, such as a subpoena, court order, or warrant, or submit an emergency request. Any information request we receive is carefully reviewed for legal sufficiency to determine, for example, whether the requesting entity is authorized to gather evidence in connection with a law enforcement investigation or to investigate an emergency involving imminent harm.

In limited emergency situations, TikTok will disclose user information without legal process. This happens when we have reason to believe, in good faith, that the disclosure of information is required to prevent the imminent risk of death or serious physical injury to any person. For more on our policies and practices, please see our Law Enforcement Data Request Guidelines.

This chart shows the volume and nature of requests for user information we received during the second half of 2020 (July 1 – December 31) and the rate with which we complied.

Country / Market | Legal requests | Emergency requests | Total requests | Total accounts specified | % of legal requests where data was produced | % of emergency requests where data was produced
Albania | 1 | 0 | 1 | 5 | 0% | 0%
Argentina | 0 | 3 | 3 | 2 | 0% | 33%
Australia | 4 | 8 | 12 | 20 | 25% | 75%
Austria | 3 | 0 | 3 | 3 | 0% | 0%
Bangladesh | 4 | 0 | 4 | 3 | 0% | 0%
Belgium | 0 | 1 | 1 | 10 | 0% | 100%
Brazil | 3 | 4 | 7 | 4 | 33% | 50%
Canada | 0 | 40 | 40 | 43 | 0% | 92%
Chile | 3 | 0 | 3 | 3 | 0% | 0%
Colombia | 1 | 0 | 1 | 1 | 0% | 0%
Czech Republic | 1 | 0 | 1 | 1 | 0% | 0%
Denmark | 2 | 0 | 2 | 2 | 0% | 0%
Finland | 2 | 4 | 6 | 6 | 0% | 100%
France | 19 | 9 | 28 | 47 | 0% | 89%
Germany | 45 | 25 | 70 | 103 | 0% | 80%
Greece | 4 | 6 | 10 | 9 | 0% | 83%
Hong Kong | 1 | 0 | 1 | 0 | 0% | 0%
Hungary | 0 | 1 | 1 | 1 | 0% | 100%
India | 103 | 1 | 104 | 103 | 62% | 0%
Ireland | 1 | 0 | 1 | 1 | 0% | 0%
Israel | 0 | 73 | 73 | 75 | 0% | 81%
Italy | 11 | 5 | 16 | 13 | 0% | 60%
Japan | 2 | 10 | 12 | 12 | 50% | 90%
Jordan | 2 | 1 | 3 | 3 | 0% | 0%
Lebanon | 0 | 1 | 1 | 1 | 0% | 0%
Luxembourg | 1 | 0 | 1 | 1 | 0% | 0%
Malta | 9 | 0 | 9 | 19 | 0% | 0%
Mexico | 2 | 4 | 6 | 7 | 0% | 25%
Nepal | 9 | 1 | 10 | 12 | 0% | 100%
Netherlands | 0 | 5 | 5 | 4 | 0% | 80%
New Zealand | 2 | 0 | 2 | 2 | 0% | 0%
Norway | 1 | 4 | 5 | 5 | 0% | 75%
Pakistan | 7 | 8 | 15 | 15 | 14% | 0%
Paraguay | 0 | 1 | 1 | 1 | 0% | 100%
Philippines | 0 | 2 | 2 | 2 | 0% | 0%
Poland | 2 | 2 | 4 | 2 | 0% | 50%
Qatar | 2 | 0 | 2 | 3 | 0% | 0%
Romania | 0 | 1 | 1 | 1 | 0% | 100%
Russia | 12 | 2 | 14 | 14 | 67% | 100%
Serbia | 1 | 0 | 1 | 1 | 0% | 0%
Singapore | 21 | 0 | 21 | 29 | 72% | 0%
South Africa | 0 | 1 | 1 | 1 | 0% | 100%
South Korea | 3 | 0 | 3 | 5 | 0% | 0%
Spain | 11 | 0 | 11 | 9 | 9% | 0%
Sri Lanka | 2 | 0 | 2 | 2 | 0% | 0%
Sweden | 2 | 2 | 4 | 4 | 0% | 100%
Switzerland | 6 | 1 | 7 | 7 | 0% | 100%
Taiwan | 1 | 0 | 1 | 1 | 0% | 0%
United Arab Emirates | 0 | 3 | 3 | 4 | 0% | 67%
Turkey | 1 | 0 | 1 | 1 | 0% | 0%
United Kingdom | 49 | 42 | 91 | 93 | 12% | 76%
United States | 409 | 137 | 546 | 560 | 83% | 81%
Uzbekistan | 0 | 1 | 1 | 1 | 0% | 100%


Government requests for content restrictions

When we receive requests from government agencies to restrict or remove content on our platform in accordance with local laws, we review all material in line with our Community Guidelines, Terms of Service, and applicable law, and take the appropriate action. If we believe that a request isn't legally valid or that the content doesn't violate our standards, we may restrict the availability of the content only in the country where it is alleged to be illegal, or we may take no action.

This chart shows the requests we received from government agencies in the second half of 2020 (July 1 – December 31) to remove or restrict content and the rate with which we complied.

Country / Market | Government requests | Total accounts specified | Accounts removed or restricted | Content removed or restricted
Armenia | 4 | 20 | 0 | 8
Australia | 32 | 98 | 74 | 16
Brazil | 1 | 1 | 1 | 1
Bangladesh | 1 | 1 | 1 | 0
Belgium | 1 | 1 | 1 | 0
Canada | 4 | 4 | 4 | 0
Cyprus | 1 | 1 | 0 | 0
Egypt | 1 | 1 | 1 | 1
Estonia | 1 | 6 | 6 | 0
Finland | 1 | 1 | 1 | 0
France | 9 | 20 | 6 | 23
Germany | 1 | 1 | 1 | 0
Indonesia | 2 | 3 | 0 | 26
Israel | 15 | 15 | 10 | 10
Iceland | 1 | 1 | 1 | 1
Japan | 2 | 0 | 0 | 8
Malaysia | 2 | 2 | 2 | 0
New Zealand | 4 | 4 | 4 | 0
Norway | 27 | 61 | 56 | 10
Nepal | 15 | 15 | 11 | 14
Pakistan | 97 | 50 | 24 | 14,263
Russia | 135 | 375 | 94 | 429
Sri Lanka | 10 | 11 | 6 | 4
Sweden | 2 | 2 | 1 | 1
Thailand | 10 | 1 | 0 | 24
Turkey | 13 | 14 | 6 | 16
Taiwan | 1 | 39 | 21 | 0
Vietnam | 2 | 2 | 1 | 0
United Arab Emirates | 2 | 4 | 1 | 25
United Kingdom | 4 | 4 | 4 | 0
United States | 5 | 5 | 4 | 0
Uzbekistan | 6 | 98 | 34 | 82


Intellectual property removal requests

The creativity of our users is the fuel of TikTok. Our platform enables their self-expression to shine, and we do our best to protect it. Our Community Guidelines and Terms of Service prohibit content that infringes on third-party intellectual property. We honor valid take-down requests based on violations of copyright and trademark law. Upon receiving an effective notice from a rights holder of potential intellectual property infringement, TikTok will remove the allegedly infringing content in a timely manner. Any activity that infringes on the intellectual property rights of others may lead to account suspension or removal. For more information on how we evaluate intellectual property infringement allegations, please see our Intellectual Property Policy.

This chart shows the copyright and trademark content take-down notices we processed in the second half of 2020 (July 1 - December 31) and the rate at which we removed content.

Copyright content take-down notices

Country / Market | Date range | Total copyright take-down reports | Successful copyright take-down reports | Percentage of successful reports
United States & Canada | July 1 - July 12, 2020 | 107 | 88 | 82.24%
Rest of the World | July 1 - Aug 12, 2020 | 6,131 | 1,574 | 25.67%

NOTE: Only copyright infringement take-down notices from copyright owners, their agencies, or attorneys are included in the above take-down statistics.

Country / Market | Date range | Total copyright take-down reports | Successful copyright take-down reports | Percentage of successful reports
United States & Canada | July 13 - Dec 31, 2020 | 9,290 | 1,830 | 19.70%
Rest of the World | Aug 13 - Dec 31, 2020 | 25,394 | 11,626 | 45.78%

NOTE: Due to tooling limitations, all copyright infringement take-down notices are included in the above take-down statistics.


Trademark content take-down notices

Country / Market | Date range | Total trademark take-down reports | Successful trademark take-down reports | Percentage of successful reports
United States & Canada | July 1 - July 12, 2020 | 37 | 9 | 24.32%
Rest of the World | July 1 - Aug 12, 2020 | 438 | 72 | 16.44%

NOTE: Only trademark infringement take-down notices from trademark owners, their agencies, or attorneys are included in the above take-down statistics.

Country / Market | Date range | Total trademark take-down reports | Successful trademark take-down reports | Percentage of successful reports
United States & Canada | July 13 - Dec 31, 2020 | 1,002 | 253 | 25.25%
Rest of the World | Aug 13 - Dec 31, 2020 | 1,026 | 122 | 11.89%

NOTE: Due to tooling limitations, all trademark infringement take-down notices are included in the above take-down statistics.


Appendix

Terminology and definitions

When determining what content is appropriate for the TikTok community, we use these terms and definitions to guide our moderation strategy. We work with a range of experts to help us understand the dynamic policy landscape and develop policies and moderation strategies to address problematic content and behaviors as they emerge.

  • Violent extremism: We take a firm stance against enabling violence on or off TikTok. We do not allow people to use our platform to threaten or incite violence, or to promote dangerous individuals or organizations. When there is a threat to public safety or an account is used to promote or glorify off-platform violence, we may suspend or ban the account. When warranted, we will report threats to relevant legal authorities. To effectively protect our community, we may consider information available on other platforms and offline to identify violent and extremist individuals and organizations on our platform. If we find such individuals or organizations on TikTok, we will close their accounts.
  • Illegal activities and regulated goods: We work to ensure TikTok does not enable activities that violate laws or regulations. We prohibit the trade, sale, promotion, and use of certain regulated goods, as well as the depiction, promotion, or facilitation of criminal activities, including human exploitation. Content may be removed if it relates to activities or goods that are regulated or illegal in the majority of the region or world, even if the activities or goods in question are legal in the jurisdiction of posting.
  • Violent and graphic content: TikTok is a platform that celebrates creativity but not shock-value or violence. We do not allow content that is gratuitously shocking, graphic, sadistic, or gruesome or that promotes, normalizes, or glorifies extreme violence or suffering on our platform. When there is a threat to public safety, we suspend or ban the account and, when warranted, we will report it to relevant legal authorities.
  • Suicide, self-harm, and dangerous acts: We care deeply about the health and well-being of the individuals that make up our community. We do not allow content depicting, promoting, normalizing, or glorifying activities that could lead to suicide, self-harm, or eating disorders. We also do not permit users to share content depicting them partaking in, or encouraging others to partake in, dangerous activities that may lead to serious injury or death.
  • Hateful behavior: TikTok is a diverse and inclusive community that has no tolerance for discrimination. We do not permit content that contains hate speech or involves hateful behavior, and we remove it from our platform. We suspend or ban accounts that engage in hate speech or that are associated with hate speech off the TikTok platform.
  • Harassment and bullying: We believe in an inclusive community and individualized expression without fear of abuse. We do not tolerate members of our community being shamed, bullied, or harassed. Abusive content or behavior can cause severe psychological distress and will be removed from our platform.
  • Adult nudity and sexual activities: We strive to create a platform that feels welcoming and safe. We do not allow nudity, pornography, or sexually explicit content on our platform. We also prohibit content depicting or supporting non-consensual sexual acts, the sharing of non-consensual intimate imagery, and adult sexual solicitation.
  • Minor safety: We are deeply committed to ensuring the safety of minors on our platform. We do not tolerate activities that perpetuate the abuse, harm, endangerment, or exploitation of minors on TikTok. Any content, including animation or digitally created or manipulated media, that depicts abuse, exploitation, or nudity of minors is a violation on our platform and will be removed when detected. When warranted, we report violative content to the National Center for Missing & Exploited Children (NCMEC) and/or other relevant legal authorities. TikTok considers a minor any person under the age of 18.
  • Integrity and authenticity: We believe that trust forms the foundation of our community. We do not allow activities that may undermine the integrity of our platform or the authenticity of our users. We remove content or accounts that involve spam or fake engagement, impersonation, misleading information that causes harm, or violations of intellectual property rights.

