July 1, 2020 – December 31, 2020 Released: February 24, 2021
About this report
TikTok is a diverse, global community fueled by creative expression. We work to maintain an environment where everyone feels safe and welcome to create videos, find community, and be entertained. We believe that feeling safe is essential to feeling comfortable expressing yourself authentically, which is why we strive to uphold our Community Guidelines by removing accounts and content that violate them. Our goal is for TikTok to remain a place for inspiration, creativity, and joy.
We are committed to being transparent about how our policies are enforced, because it helps build trust with our community and holds us accountable. We publish Transparency Reports to provide visibility into the volume and nature of content removed for violating our Community Guidelines or Terms of Service.
In the second half of 2020 (July 1 – December 31), 89,132,938 videos were removed globally for violating our Community Guidelines or Terms of Service, which is less than 1% of all videos uploaded on TikTok. Of those videos, we identified and removed 92.4% before a user reported them, 83.3% before they received any views, and 93.5% within 24 hours of being posted.
This chart shows the five markets with the largest volumes of removed videos.
Due to the pandemic, we continue to rely on technology to detect and automatically remove violating content in some markets, such as Brazil and Pakistan. Of the total videos removed, 8,295,164 were flagged and removed automatically for violating our Community Guidelines. Those videos are not reflected in the charts below.
TikTok offers creators the ability to appeal their video’s removal. When we receive an appeal, we review the video a second time and will reinstate it if it doesn’t violate our policies. Last half, we reinstated 2,927,391 videos after they were appealed. We aim to be consistent and equitable in our moderation and will continue our work to reduce false positives and provide ongoing education and training to our moderation team.
This chart shows the volume of videos removed by policy violation. A video may violate multiple policies and each violation is reflected in this chart.
Of the videos removed by our moderation team, the following chart shows the rate at which videos were proactively removed by policy reason. Proactive removal means detecting and removing a video before it’s reported. Removal within 24 hours means removing the video within 24 hours of it being posted on our platform. These numbers are understated as they do not include videos removed automatically by technology.
[Chart: proactive removal rate and removal rate within 24 hours, by policy category]
Adult nudity and sexual activities We strive to create a platform that feels welcoming and safe, and we remove nudity and sexually explicit content. Of the videos removed, 20.5% violated this policy, which is down from 30.9% in the first half of 2020. One reason for this decrease is improvements to our triage systems that separate adult nudity from minor nudity. We removed 88.3% of these videos before they were reported to us, and 90.6% were removed within 24 hours of being posted.
Harassment and bullying We believe in an inclusive community and individualized expression without fear of abuse, and we do not tolerate members of our community being shamed, bullied, or harassed. Of the videos we removed, 6.6% violated this policy, which is up from 2.5% in the first half of 2020. This increase reflects adjustments to our policies around sexual harassment, threats of hacking, and targets of bullying statements, which are now more comprehensive. Additionally, we saw modest improvements in our ability to detect harassment or bullying proactively, which remains a challenge given linguistic and cultural nuances. Of these videos, 66.5% were removed before they were reported to us, and 84.1% were removed within 24 hours of being posted. We are committed to closing this gap and will keep our community updated as we make developments in this area.
Hateful behavior TikTok is a diverse and inclusive community that has no tolerance for hateful behavior. Last year we renamed this policy from “hate speech” to “hateful behavior” to take a more comprehensive approach to combating hateful ideologies and off-platform activities. As a result, 2% of the videos we removed violated this policy, up from 0.8% in the first half of 2020. We have systems to detect hateful symbols, like flags and icons, but hate speech remains a challenge to detect proactively, and we continue to invest in improvements. We removed 72.9% of hateful behavior videos before they were reported to us, and 83.5% were removed within 24 hours of being posted.
Illegal activities and regulated goods We work to ensure TikTok does not enable activities that violate laws or regulations, such as fraud or scam content, and 17.9% of the videos we removed violated this policy. This is a slight decrease from 19.6% in the first half of 2020. We attribute this to improvements related to our automation and detection systems as well as strengthened workstreams. Of these videos, 96.3% were removed before they were reported, and 94.8% were removed within 24 hours of being posted.
Integrity and authenticity We believe that trust forms the foundation of our community, and we do not allow content or accounts that involve fake engagement, impersonation, and misinformation. Of the videos removed, 2.4% violated this policy, up from 1.2% in the first half of 2020. We added fact-checking partners to additional markets and now have support in 16 languages. This helps us more accurately assess content and remove misinformation. We’ve also made improvements in our ability to detect and remove fake engagement and spam. Of the videos removed, 70.5% were removed before they were reported to us, and 91.3% were removed within 24 hours of being posted. We are investing in our infrastructure to improve our proactive detection, especially when it comes to identifying misinformation.
Minor safety We are deeply committed to the safety of minors and regularly strengthen this policy and the processes that help keep minors safe. For instance, we’ve expanded our harmful activities by minors policy to remove content that depicts minors possessing or ingesting alcohol and tobacco products (both are treated equally and will be removed), as well as other behavior that could put the well-being of minors at risk. In the second half of 2020, 36% of content removed violated our minor safety policy, up from 22.3% in the first half of 2020. Of those videos, 97.1% were removed before they were reported to us, and 95.8% were removed within 24 hours of being posted.
TikTok leverages PhotoDNA, a technology that helps identify and remove known child exploitation content, to protect against child sexual abuse material (CSAM), and we’ve continued to invest in our own systems that work to identify CSAM. These efforts have further improved our ability to remove and report content and accounts to the National Center for Missing & Exploited Children (NCMEC) and relevant legal authorities. As a result, we made 22,692 reports to NCMEC in 2020 compared to 596 in 2019.
Suicide, self-harm, and dangerous acts We care about the health and well-being of the individuals that make up our community. In the second half of 2020, we updated our policies on self-harm, suicide, and eating disorders to reflect feedback and language used by mental health experts. We also partnered with a number of organizations to support people who may be struggling by directing relevant searches and hashtags to emergency support. Of the videos removed, 6.2% violated these policies, which is a decrease from 13.4% in the first half of 2020, in part because we now remove content that shows risky behavior by minors under our minor safety policy. Of these videos, 94.4% were removed before they were reported, and 91.9% were removed within 24 hours of being posted.
Violent extremism We take a firm stance against enabling violence on or off TikTok. As we refreshed our Community Guidelines last fall, we clarified our previous “dangerous individuals and organizations” policy to more holistically address the challenge of violent extremism and specify what TikTok considers a threat or incitement to violence. Of all videos removed, 0.3% violated this policy, which is on par with content removed during the first half of 2020. Of these videos, 86.9% were removed before they were reported, and 89.4% were removed within 24 hours of being posted.
Violent and graphic content TikTok is a platform that celebrates creativity, not shock value or violence. Of the total videos removed, 8.1% violated this policy, compared to 8.7% in the first half of 2020. Of these videos, 93.2% were removed before they were reported, and 92.7% were removed within 24 hours of being posted. We allow some such content, like videos documenting violent protests or animals hunting in nature, to remain on our platform for documentary purposes. To give people more control over the videos they watch, we introduced opt-in viewing screens for this content.
In the second half of 2020, 6,144,040 accounts were removed for violating our Community Guidelines. On top of that, an additional 9,499,881 spam accounts were removed along with 5,225,800 spam videos posted by those accounts. We prevented 173,246,894 accounts from being created through automated means.
TikTok has strict policies to protect users from fake, fraudulent, or misleading content, including ads. Advertiser accounts and ad content are held to these policies and must follow our Community Guidelines, Advertising Guidelines, and Terms of Service. In the second half of 2020, we rejected 3,501,477 ads for violating advertising policies and guidelines. We are committed to creating a safe and positive environment for our users, and we regularly review and further strengthen our systems to combat ads that violate our policies.
We continually invest in our policies, products, and partnerships to support the overall health of our community. Here are some of the key updates we made in the second half of 2020.
Supporting community well-being
We strengthened our Community Guidelines to promote community well-being based on behavior we saw on platform and feedback we heard from our community and experts. For example, our updated policies on self-harm, suicide, and eating disorders reflect updated feedback and language provided by mental health experts. Our ad policies now ban ads for fasting apps and weight loss supplements. And we improved the way we notify users to help them understand why their video was removed.
TikTok partners with a range of organizations to support people who may be struggling with an eating disorder, self-harm behavior, or thoughts of suicide. Now, relevant searches and hashtags are redirected to emergency support where users can access free and confidential help. We also provide resources with evidence-based actions someone can take to improve their emotional well-being.
Supporting TikTok families
We regularly speak to parents, teens, and youth safety experts to develop meaningful ways for families to create the TikTok experience that’s right for them. In late 2020, we expanded our Family Pairing features to enable parents to set more guardrails on their teens’ content and privacy settings. With ways to restrict search, comments, and screen time, we hope these tools encourage families to have broader conversations about digital safety.