
Combating harmful misinformation

TikTok is where more than a billion people create and share content on topics that matter to them. We work hard to maintain a safe, authentic space where people can discover original content and engage with authentic creators. This post explains how we encourage authentic expression on TikTok by protecting our community from harmful misinformation, partnering with experts (including through our Global Fact-Checking Program) to respond to evolving misinformation trends, and empowering our community with authoritative information.


Protecting people from harmful content

In a global community, it is natural for people to have different opinions, but when it comes to topics that impact people's safety, we seek to operate on a shared set of facts. Our Community Guidelines prohibit inaccurate, misleading, or false content that may cause significant harm to individuals or society, regardless of the poster's intent. This includes not only physical harm but also societal harm, like the undermining of trust in elections or public health initiatives.

Our misinformation policies

We have robust policies around specific types of misinformation like medical, climate change, and election misinformation, as well as misleading AI-generated content, conspiracy theories, and public safety issues like natural disasters. In keeping with our Community Principles (which embody our human rights commitment to protecting expression while preventing harm), our policies outline a range of enforcement outcomes, which are proportionate to how much harm the content can cause. This includes removing content or reducing its reach by making it ineligible for For You feeds. We continuously consult with experts and our global Safety Advisory Councils to ensure our approach is updated, balanced, and respectful of local nuance.
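
As a rough illustration, the proportionality described above can be thought of as a mapping from assessed harm to an enforcement outcome. The sketch below uses illustrative harm tiers and outcome names; it is not our internal rubric.

    from enum import Enum

    class HarmLevel(Enum):
        """Illustrative harm tiers; the internal rubric is not public."""
        SIGNIFICANT = "significant"   # e.g., dangerous medical or election falsehoods
        MODERATE = "moderate"         # e.g., unverified claims about unfolding events
        LOW = "low"                   # e.g., personal anecdotes, minor myths

    def enforcement_outcome(harm: HarmLevel) -> str:
        """Map assessed harm to a proportionate outcome, as described above."""
        if harm is HarmLevel.SIGNIFICANT:
            return "remove"                   # take the content down
        if harm is HarmLevel.MODERATE:
            return "ineligible_for_for_you"   # keep it up, but reduce its reach
        return "no_action"

    print(enforcement_outcome(HarmLevel.MODERATE))  # ineligible_for_for_you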

Transparency around content moderation is critical. When we take action on someone's content for violating our misinformation policies, we let them know through an Inbox notification and on the "Account Status" page in our Safety Center, where they can also file an appeal.

"Disinformation" vs. misinformation

Our misinformation policies apply to content regardless of the poster’s intent, as the content’s harm is the same either way. Hence, they cover both “disinformation” (which is intentionally shared to mislead) and harmful misinformation that may not have been shared with the goal of deceiving people.

We also address disinformation by removing accounts that repeatedly post misinformation that violates our policies, and have expert teams who continuously monitor for disinformation campaigns, inauthentic behavior, and influence operations. Learn more about their work here.

To balance creative expression with preventing harm, we, like others in our industry, do not prohibit people from sharing personal experiences, myths that are merely inaccurate, or misinformation that could cause only reputational or commercial harm.

Detecting violative content

We detect misinformation through automated technology, user reports, and proactive intelligence briefings from experts and our fact-checking partners. When high-risk events like elections, public safety emergencies, or natural disasters are unfolding, we launch additional local detection efforts, often collaborating with experts or authorities and establishing reporting channels for authoritative local partners so we can better stay ahead of evolving misinformation. When new misinformation claims are discovered, we proactively check our platform for similar content.
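
As a simplified illustration of that last step, checking newly posted content against known claims can be sketched as a similarity search. The claim list, threshold, and lexical matching below are illustrative stand-ins; production systems rely on far richer signals.

    from difflib import SequenceMatcher

    # Known debunked claims (contents illustrative; real databases are far larger).
    DEBUNKED_CLAIMS = [
        "drinking bleach cures covid-19",
        "5g towers spread the virus",
    ]

    def similarity(a: str, b: str) -> float:
        """Crude lexical similarity; production systems would use learned
        representations and many other signals."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def flag_for_review(caption: str, threshold: float = 0.6) -> bool:
        """Queue content for moderator review if it resembles a known claim."""
        return any(similarity(caption, claim) >= threshold
                   for claim in DEBUNKED_CLAIMS)

    print(flag_for_review("Doctors say drinking bleach cures COVID-19"))  # True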

Unverified content

Information evolves rapidly, and sometimes it's not clear whether a claim is true or false. When we can't verify content, we may label it as "unverified" and reduce its spread by making it ineligible for For You feeds. This includes content about unfolding events where details are still emerging. We assess the accuracy of content by partnering with independent, International Fact-Checking Network-accredited fact-checking organizations through our Global Fact-Checking Program.

Integrity & Authenticity moderators

While some misinformation, such as reposts of previously debunked content, can be handled by technology alone, misinformation evolves quickly and is highly nuanced. That's why we have dedicated teams and processes for Integrity and Authenticity policies like misinformation, including enhanced training, tooling, and expertise. When potential misinformation is detected, our Integrity & Authenticity moderators must determine whether it's accurate, false, or unverifiable in order to apply our policies. To do so, they consult a global database of previously fact-checked claims and route any new, evolving, or borderline claims to our Global Fact-Checking Program for independent evaluation. Learn more about the program below.

Partnering with experts

TikTok partners with experts across the world to support consistent and accurate moderation, understand local context, and empower our community with authoritative information.

TikTok’s Global Fact-Checking Program

Through our Global Fact-Checking Program, we partner closely with 18 IFCN-accredited fact-checking organizations that assess the accuracy of content on TikTok in over 50 languages across more than 100 countries. These partners do not moderate content directly on TikTok; instead, they assess whether a claim is true, false, or unsubstantiated so that our moderators can take the right action under our Community Guidelines. They also share intelligence that helps us detect harmful misinformation and anticipate misinformation trends.

Here’s how our Global Fact-Checking Program works:

  1. We detect potentially harmful misinformation. In order to apply our Community Guidelines, we must confirm that it is indeed misinformation.
  2. To do so, moderators consult our database of previously fact-checked claims. If the content in question represents a new or evolving claim, they route it to our fact-checking partners for assessment.
    • While the content is being fact-checked, we may make it ineligible for the For You feed out of an abundance of caution.
  3. Once fact-checkers have assessed the content, our moderators apply our Community Guidelines accordingly.
    • If content is determined to be accurate, it stays on the platform and is eligible for For You feeds (as long as it doesn’t violate any other Community Guidelines).
    • If content is confirmed as harmful misinformation that violates our Community Guidelines, our moderators apply our policies and either remove the video or restrict its reach.
    • If content can’t be verified after fact-checking, we may label it as unverified and make it ineligible for For You feeds to reduce its reach.
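
Putting the steps above together, the flow can be sketched roughly as follows. The database contents, verdict names, and outcome labels are illustrative placeholders, not our internal identifiers.

    from enum import Enum

    class Verdict(Enum):
        ACCURATE = "accurate"
        FALSE = "false"
        UNSUBSTANTIATED = "unsubstantiated"

    # Step 2: database of previously fact-checked claims (contents illustrative).
    FACT_CHECK_DB = {"example debunked claim": Verdict.FALSE}

    def assess_by_partners(claim: str) -> Verdict:
        """Stand-in for independent assessment by IFCN-accredited partners.
        In this sketch, unknown claims simply come back as unsubstantiated."""
        return Verdict.UNSUBSTANTIATED

    def moderate(claim: str) -> str:
        # Step 2: consult the database; route new or evolving claims to partners.
        verdict = FACT_CHECK_DB.get(claim)
        if verdict is None:
            # While fact-checking is pending, the video may be held out of
            # For You feeds out of an abundance of caution.
            verdict = assess_by_partners(claim)
            FACT_CHECK_DB[claim] = verdict
        # Step 3: apply the Community Guidelines to the verdict.
        if verdict is Verdict.ACCURATE:
            return "eligible_for_for_you"
        if verdict is Verdict.FALSE:
            return "remove_or_restrict"
        return "label_unverified_and_restrict"

    print(moderate("example debunked claim"))  # remove_or_restrict
    print(moderate("a brand-new claim"))       # label_unverified_and_restrict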

When choosing our fact-checking partners, we prioritize organizations that are IFCN-accredited, employ locally based journalists, and specialize in assessing misinformation on social media and in short-form videos. Many of our fact-checking partners cover several countries, which helps us scale the program to support global moderation. We currently work with: Agence France-Presse (AFP), Animal Político, Australian Associated Press (AAP), Code for Africa, dpa Deutsche Presse-Agentur, Demagog, Estadão Verifica, Facta, Lead Stories, Logically Facts, Newschecker, Newtral, Poligrafo, PolitiFact, Reuters, Science Feedback, and Teyit.

[Map: Global Fact-Checking Program coverage]

Leaning into local nuance

Online misinformation is a shared global challenge, but it manifests differently across countries, languages, and communities. We’re continuously partnering with both creators and local experts like advocacy organizations, public health authorities, electoral commissions, and local fact-checking experts (in addition to our Global Fact-Checking Program) to reduce misinformation, elevate authoritative information, and make media literacy skills accessible to all of our global communities.

Empowering communities with reliable information

In addition to taking action on content itself, we work continuously to deter misinformation proactively by empowering our community with media literacy resources that help them recognize misinformation, assess content critically, and report violative content. These include:

  • For some content that’s unverified and potentially harmful, we prompt viewers to reconsider before sharing it (see the sketch after this list).
  • In addition to our unverified content labels, we provide a labeling tool for AI-generated content and blue “verified” checks to confirm the authenticity of notable accounts.
  • For topics that can be vulnerable to misinformation—like health, elections, or unfolding crises—we direct searches towards authoritative information and add informational banners to relevant hashtag pages.
  • We add informational banners to LIVE content.
  • We label state-affiliated media to help viewers better understand sources behind content.
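
As referenced in the first item above, here is a minimal sketch of how a "reconsider before sharing" prompt gates the sharing of unverified content. The label name and return values are hypothetical, not internal identifiers.

    def share_flow(video_labels: set, viewer_confirms: bool) -> str:
        """Gate sharing of unverified content behind a 'reconsider' prompt."""
        if "unverified" in video_labels and not viewer_confirms:
            return "share_cancelled"   # the viewer reconsidered after the prompt
        return "shared"

    print(share_flow({"unverified"}, viewer_confirms=False))  # share_cancelled
    print(share_flow(set(), viewer_confirms=False))           # shared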

AI-generated content

Artificial intelligence (AI) enables incredible creative opportunities, but it can confuse or mislead viewers if they’re not aware that content was generated or edited with AI. As more creators take advantage of AI to enhance their creativity, we want to support transparent and responsible content creation practices by investing in media literacy initiatives that empower creativity while giving people context about the content they’re viewing.

Our policies on AI-generated content

Our Synthetic and Manipulated Media policy requires creators to label AI-generated content that shows realistic scenes, and we were the first platform to build a tool for creators that helps them do this easily. We also prohibit AI-generated content that contains the likeness of any real private figure, including anyone under 18, as well as synthetic media of public figures if the content is used for endorsements or violates any other policy. To increase clarity about AI-powered TikTok products, all TikTok effects that are significantly edited with AI must include “AI” in their name and corresponding effects label.
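
As a rough sketch, the labeling requirement described above behaves like the following check. The field and outcome names are hypothetical, and the real policy weighs far more context than a handful of booleans.

    def aigc_check(is_ai_generated: bool, realistic_scene: bool,
                   has_aigc_label: bool, private_figure_likeness: bool) -> str:
        """Illustrative decision logic for the policy described above."""
        if is_ai_generated and private_figure_likeness:
            return "remove"          # likeness of real private figures is prohibited
        if is_ai_generated and realistic_scene and not has_aigc_label:
            return "require_label"   # realistic AI content must be disclosed
        return "ok"

    print(aigc_check(True, True, False, False))  # require_label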

Evolving detection to keep pace with AI

As AI-generated content and the technology behind it evolve, we continue to evolve our detection methods. This includes launching and iterating on new detection methods for AI-generated content, as well as assessing options for partnerships to determine the origin of content. We also continue to partner closely with experts. In February 2023, TikTok was a launch partner for the Partnership on AI’s Framework for Responsible Practices for Synthetic Media, a code of industry best practices that promotes transparency and responsible innovation around AI-generated content.
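
As a simplified illustration of what determining the origin of content could look like, a system might first look for an embedded provenance declaration (in the spirit of content-credential standards) before falling back to classifier-based detection. The metadata fields below are hypothetical.

    def content_origin(metadata: dict) -> str:
        """Hypothetical provenance check: prefer an embedded generator claim
        over model-based detection when one is present."""
        generator = metadata.get("generator")
        if generator:
            return f"declared AI tool: {generator}"
        return "no provenance data; fall back to AI-content classifiers"

    print(content_origin({"generator": "ExampleImageModel"}))
    print(content_origin({}))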
