US Elections Integrity Hub

Our Approach

Elections are important moments of community conversation. During this time, we continue to invest in keeping people safe and protecting the integrity of the TikTok platform for our 170M+ community members in the US.

Our team is made up of multi-disciplinary experts in democracy, elections, civil society, security, and technology who are working tirelessly to protect TikTok and stay ahead of evolving threats. We do this by enforcing robust policies aimed at preventing the spread of misinformation, elevating reliable election information, and collaborating with internal and external experts who help us evaluate and improve our approach on an ongoing basis.

We've developed this page to provide a centralized view into everything that we're doing to maintain the integrity of our platform in the lead up to and through the 2024 US presidential election. This includes an overview of the relevant Community Guidelines that we rely on to maintain civility and integrity around political discourse, a snapshot into key features that we've launched that help us enforce our Community Guidelines, and a running microblog that provides visibility into our latest efforts.

This page provides a comprehensive update on everything we are doing to protect users during the 2024 US Election:

Countering harmful misinformation and hateful content
  • Our policies prohibit harmful misinformation and hate speech. We detect and take action against this content using a combination of technology and specialized moderators, whom we equip with enhanced training and access to tools covering specific topics such as misinformation, hate, and violent behaviors.
  • We also prohibit false or misleading information about how to vote, how to register to vote, the eligibility qualifications for candidates, and the procedures that govern implementation of elections. We work to detect and take action against this content by using a combination of technology and human moderators.
  • We partner with 19 IFCN-accredited fact-checking organizations who assess the accuracy of content so our moderators can apply our policies.
  • When content is unverified, we may label it, reduce its reach by making it ineligible for the "For You" feeds, and prompt people to reconsider sharing it to help prevent potential misinformation from spreading.
  • Learn more about our policies, our efforts to stay open, and our efforts to stay accountable.
Connecting our community to reliable information about elections
  • We partner with electoral commissions and fact-checking organizations to build Election Centers that connect people to trustworthy information about voting, and that reached over 55 million people globally in 2023.
  • This year, we launched an updated US Elections Center in partnership with nonprofit Democracy Works, providing our US community members with reliable voting information for all 50 states and Washington, DC.
  • We direct people to the Elections Center through prompts on relevant election content and searches, and continue to add information throughout the year, including election results.
Seeking to deter influence networks and attempts to manipulate our platform
  • Our policies prohibit attempts to engage in covert influence operations by manipulating our platform and/or harmfully misleading our community.
  • Our approach focuses on assessing accounts’ behavior and looking for evidence of linkages between them, their actions or their techniques to determine if they are coordinating together to misrepresent who they are and what they are doing.
  • When we identify this behavior, we ban the accounts, take action on others that we assess as part of the network, and report them regularly in our transparency center. Learn more about our policies here.
Requiring transparency about AI-generated content and prohibiting AIGC that's misleading
  • Our policies prohibit AI-generated content that contains the likeness of a public figure if the content is used for endorsements or violates any other policy, and require creators to label AI-generated content that shows realistic scenes.
  • All TikTok effects that are significantly edited with AI must include “AI” in their name, and the corresponding effects will be automatically labeled. This includes any AI-generated content that contains realistic images, audio, and video.
  • In May, we became the first social media or video sharing platform to implement the Coalition for Content Provenance and Authenticity's (C2PA) Content Credentials technology, which enables us to automatically label some AI-generated content that was made on other platforms.
  • TikTok was a launch partner to the Partnership on AI’s Framework for Responsible Practices for Synthetic Media, a new code of industry best practices that promotes transparency and responsible innovation around AI-generated content. Learn more about our policies here.
Prohibiting paid promotion of political content
  • We’ve long prohibited paid promotion of political content on TikTok because we don’t believe that this kind of advertising is conducive to the inclusive, authentic and creative experience we strive to provide for our community.
  • Specifically, we don’t allow paid political promotion, political advertising, or fundraising by politicians and political parties, including both traditional paid ads and creators receiving compensation to support or oppose a candidate for office.

Ongoing Work

We're providing running updates on the continuous work we do to protect TikTok during the elections.

  • September 30, 2024: Over the last week, actions we've taken to advance transparency and protect TikTok throughout the U.S. elections include:
    • Updating our Election Center with 8 videos from partners including Factchequeado, MediaSmarts and MediaWise that share tips on spotting, fact checking, and reporting online misinformation.
    • Convening over 60 external leaders from civil society, academia, government, and industry for the TikTok-hosted event "Informed and Empowered: Digital Literacy for the Global Future." Suzy Loftus, Head of Trust & Safety for TikTok U.S. Data Security, hosted a panel about empowering digital literacy with Dr. Renata Dwan, Special Adviser, Office of the UN Secretary-General's Envoy on Technology; Angie Drobnic Holan, Director of the International Fact-Checking Network (IFCN); and Lisa Remillard, journalist and TikTok creator "The News Girl".
    • Publishing our Q2 2024 Community Guidelines Enforcement Report which provides quarterly data about our global content moderation efforts across TikTok.
  • September 23, 2024: Over the last week, our work to protect TikTok during the US elections has included:
    • Removing accounts associated with Rossiya Segodnya and TV-Novosti for engaging in covert influence operations on TikTok, which violates our Community Guidelines. Previously, these accounts were restricted in the EU and UK. Globally, under our state-affiliated media policy, their content was also ineligible for the For You feed to limit attempts to influence foreign audiences on topics of global events and affairs, and their accounts were labeled as state-controlled media to provide important context about the source of the content. The removed accounts will be reported in our September covert influence operations report.
    • Publishing our August 2024 covert influence operation report, which is where we disclose information about the covert influence operations we disrupt to promote transparency and accountability. We removed 5 networks in August 2024, along with 7,792 accounts associated with networks that we disrupted and disclosed in previous reports.
  • September 11, 2024: Over the last week, our work to protect TikTok during the US elections has included:
    • Adding a countdown timer and new resources to our US Election Center, to help people keep track of how many days are left before voting day. Learn more in our newsroom about how we're expanding our Election Center.
    • Teaming up with peers to support a new AI Literacy initiative from the National Association for Media Literacy Education. Valiant Richey, Global Head of Outreach & Partnerships at TikTok, said: "As members of the Content Authenticity Initiative and inaugural supporters of the Partnership on AI's responsible AI framework, TikTok is committed to advancing AI literacy so people can explore this technology safely. We're proud to sponsor NAMLE's effort to bring AI literacy to more audiences as we continue investing in leading AI labeling tools, provenance technologies and educational initiatives that help raise awareness around AI transparency." Read more in NAMLE's press release.
    • Permanently banning accounts that were deceptively distributing Russian state media messaging among US audiences in violation of our policies. Following new evidence published by the Department of Justice and a review of the accounts' behavior on our platform, we removed three accounts representing a media company, its founder, and a fake news outlet so far for violating our policies prohibiting deceptive behavior and paid political promotion. We will continue to investigate new information as it becomes available.
    • Following the US presidential debate, we have been proactively monitoring and taking action on content that harmfully misrepresents events that happened in the debate or otherwise violates our Community Guidelines. Learn more about relevant election policies in the "Our Policies" section below.
  • September 4, 2024: Published a summary of activities that we are engaging in to protect our platform during the 2024 US election.

Previous Updates

Jan - July 2024

June 2024

  • We disrupted two covert influence operations, and removed 2,824 accounts associated with previously disrupted networks attempting to re-establish their presence within this reporting period (Learn more).


May 2024

  • We disrupted seven covert influence operations, and removed 3,183 accounts associated with previously disrupted networks attempting to re-establish their presence within this reporting period. This includes a covert influence network that operated from China and targeted a US audience. The individuals behind this network created inauthentic accounts in order to artificially amplify narratives that the US is corrupt and unsafe. Accounts within the network utilized audio originally produced on other platforms such as news broadcasts or podcasts (Learn more).


May 24

  • We expanded our state-affiliated media policies so that when we identify accounts attempting to reach communities outside of their home country on current global events and affairs, they will become ineligible to appear in the For You feed. In addition, state-affiliated media accounts that advertise on our platform will only be able to advertise to audiences in the market where their parent entity is registered. Learn more.
  • We introduced a new dedicated Transparency Report on covert influence operations. We've long reported covert influence disruptions in our quarterly Community Guidelines Enforcement Report. The new report is updated more frequently, and includes more information about operations that attempted to return to our platform with new accounts after we previously removed them.


May 9

  • We became the first social media or video sharing platform to implement the Coalition for Content Provenance and Authenticity's Content Credentials technology, which enables us to automatically label some AI-generated content that was made on other platforms. Learn more


April 2024

  • We disrupted five covert influence operations, and removed 3,263 accounts associated with previously disrupted networks attempting to re-establish their presence within this reporting period (Learn more).


April 17

  • We introduced updates to our Community Guidelines that further clarify our rules along with new features that help creators learn our policies and check their account status. (Learn more)


March 2024

  • We disrupted two covert influence operations, and removed 168 accounts associated with previously disrupted networks attempting to re-establish their presence within this reporting period (Learn more).


February 2024

  • We disrupted seven covert influence operations, and removed 12,127 accounts associated with previously disrupted networks attempting to re-establish their presence within this reporting period (Learn more).
  • We disrupted a network with 110,161 followers that operated from China and targeted a US audience with accounts promoting Chinese policy and culture. The individuals behind this network created inauthentic accounts in order to artificially amplify positive narratives of China, including support for the People’s Republic of China (PRC) policy decisions and strategic objectives, as well as general promotion of Chinese culture. This network utilized accounts impersonating high-profile US creators and celebrities in an attempt to build an audience.
  • We disrupted a network with 116,612 followers that operated from Iran and targeted audiences in the US and UK. Prior to October 2023, the individuals behind this network created inauthentic identities and used inauthentic means to gain user engagement on narratives surrounding UK domestic policy discourse. After October 2023, the network operator used the same inauthentic accounts to target the war between Hamas and Israel and artificially amplify pro-Iranian narratives and narratives critical of the US and Israel. The accounts in the network initially posted content associated with travel and tourism in order to build an audience, before switching to political topics.


February 16

  • We joined forces with 19 leading global tech companies to sign a new pledge to combat deceptive AI-generated election content. These commitments build on our continued work to advance AI transparency, combat misinformation and protect elections globally.


January 2024

  • We disrupted one covert influence operation, and removed 2,358 accounts associated with previously disrupted networks attempting to re-establish their presence within this reporting period (Learn more).

Our Policies

We have Community Guidelines to create a welcoming, safe, and entertaining experience. The guidelines apply to everyone and everything on our platform. They include rules for what is allowed on TikTok, as well as standards for what is eligible for the For You feed (FYF). Here you can find a list of Community Guidelines that help us protect the integrity of our elections.

Civic and Election Integrity

Elections are important events and are often the subject of intense discussion and analysis. We try to balance enabling these discussions, while also being a place that brings people together and does not cause division.

We do not allow misinformation or content about civic and electoral processes that may result in voter interference, disrupt the peaceful transfer of power, or lead to off-platform violence. That includes:

  • Election misinformation about:
    • How, when, and where to vote or register to vote
    • Eligibility requirements of voters to participate in an election, and the qualifications for candidates to run for office
    • Laws, processes, and procedures that govern the organization and implementation of elections and other civic processes, such as referendums, ballot propositions, or censuses
    • Final results or outcome of an election
  • Promoting or providing instruction on illegal participation and electoral interference, including intimidation of voters, election workers, and electoral observers
  • Calling for the disruption of a legitimate outcome of an election outside of the legal system, such as through a coup

Content may be ineligible for the FYF if it contains misinformation that can hinder the ability of a voter to make an informed decision. To be cautious, unverified claims about an election and content temporarily under review by fact-checkers may also be ineligible for the FYF. That includes:

  • Unverified claims about an election, such as a premature claim that all ballots have been counted or tallied
  • Statements that significantly misrepresent authoritative civic information, such as a false claim about the text of a parliamentary bill

We do not allow paid political promotion, political advertising, or fundraising by politicians and political parties (for themselves or others). Our political advertising policy includes both traditional paid advertisements and creators receiving compensation to support or oppose a candidate for office. That includes:

  • Solicitations for campaign fundraising by government, politician, and political party accounts (GPPPAs)
  • Content like a video from a politician asking for donations, or a political party directing people to a donation page on their website.
  • The use of promotional tools available on the platform, like Promote or TikTok Shop. Accounts we identify as belonging to politicians and political parties have their access to advertising features turned off.
Misinformation

We do not allow misinformation that may cause significant harm to individuals or society, regardless of intent. Our policies apply to both intentional “disinformation” as well as harmful misinformation that may not have been shared with the goal of deceiving people.

Content is ineligible for the FYF if it contains misinformation that may cause moderate harm, such as certain health content, conspiracy theories, repurposed media, or misrepresented authoritative sources. That includes:

  • False or misleading content regarding the treatment or prevention of injuries, conditions, or illnesses that are not immediate or life-threatening.
  • Beliefs about unexplained events, or content that rejects generally accepted explanations for events, including suggesting they were carried out by covert or powerful individuals or groups.
  • Unedited media content that is presented out of context and may mislead a person about a developing topic of public importance.
  • Content that promotes misleading correlations or conclusions related to authoritative information that is recognized and trusted, such as reports from research institutions

To help users manage their TikTok experience, we may apply warning labels to content that has been assessed by our fact-checking partners and cannot be verified as accurate. We may also send prompts to reconsider sharing such content, or make some information ineligible for the For You feed, such as:

  • Conspiracy theories that are unfounded and claim that certain events or situations are carried out by covert or powerful groups, such as "the government" or a "secret society"
  • Moderate harm health misinformation, such as an unproven recommendation for how to treat a minor illness
  • Repurposed media, such as showing a crowd at a music concert and suggesting it is a political protest
  • Misrepresenting authoritative sources, such as selectively referencing certain scientific data to support a conclusion that is counter to the findings of the study
  • Unverified claims related to an emergency or unfolding event
  • Potential high-harm misinformation while it is undergoing a fact-checking review


To be cautious, unverified information about emergencies and content temporarily under review by fact-checkers is also ineligible for the FYF.

Hate Speech and Hateful Behavior

TikTok is enriched by the diversity of our community. Our differences should be embraced, rather than a cause for division. We want users to share what inspires them, but TikTok is not a place to spread beliefs or propaganda that encourage violence or hate, and we do not allow the presence of violent and hateful organizations or individuals on our platform.

We do not allow any hate speech, hateful behavior, or promotion of hateful ideologies, which we classify as systems of beliefs that exclude, oppress, or otherwise discriminate against individuals based on their protected attributes. That includes:

  • Claiming supremacy over a protected group, such as racial supremacy, misogyny, anti-LGBTQ+, antisemitism, or Islamophobia
  • Making conspiratorial statements that target a protected group, such as supporting the Great Replacement Theory or saying that Jewish people control the media
  • Using associated symbols and images
  • Facilitating the trade or marketing of any items that promote hate speech or hateful ideologies, such as books or clothing with hateful logos


We do not allow attacks against protected attributes, meaning personal characteristics that you are born with, that are immutable, or that it would cause severe psychological harm to be forced to change or be attacked because of. That includes:

  • Caste
  • Ethnicity
  • National Origin
  • Race
  • Religion
  • Tribe
  • Immigration Status
  • Gender
  • Gender Identity
  • Sex
  • Sexual Orientation
  • Disability
  • Serious Disease


We do not allow anyone to promote or provide material support to violent or hateful actors. That includes:

  • Hateful organizations
  • Individuals who cause serial or mass violence, or promote hateful ideologies
  • Violent criminal organizations
  • Violent extremists
AI-generated content

We welcome the creativity that new artificial intelligence (AI) and other digital technologies may unlock. However, AI and other digital editing technologies can make it difficult to tell the difference between fact and fiction, which may mislead individuals or harm society.

We prohibit AI-generated content that contains the likeness of any real private figure, including anyone under 18, as well as synthetic media of public figures if the content is used for endorsements or violates any other policy. That includes:

  • The likeness of adult private figures, if we become aware it was used without their permission
  • A public figure who is:
    • being degraded or harassed, or engaging in criminal or anti-social behavior
    • taking a position on a political issue, commercial product, or a matter of public importance (such as an election)
    • being politically endorsed or condemned by an individual or group

We require TikTok effects that are significantly edited with AI to include “AI” in their name and corresponding effects label. That includes:

  • Images of real people that may show highly realistic-appearing scenes or use a particular artistic style, such as a painting, cartoons, or anime.
  • Content that shows people doing something they did not do, saying something they did not say, or altering their appearance in a way that makes them difficult to recognize or identify.
  • Certain face filters, or an animation of an individual.
  • Content that shows realistic scenes, using images, video, or audio, that would lead someone to believe that the person shown is real or that the event took place in the real world, such as a scene shown in the style or quality of a photograph or video.

We do not allow content that shares or shows fake authoritative sources or crisis events, or falsely shows public figures in certain contexts.

Government, Politician, and Political Party Accounts (GPPPAs)

We recognize that some members of our community—namely governments, politicians, and political parties—play important and high-profile roles in civic processes and civil society. Our account policies for government, politician, and political party accounts seek to strike a balance between enabling people to engage with their content while also protecting our community from being exposed to harmful content.

We're also committed to providing a safe and secure environment for everyone in our community. For further protection, we require government, politician, and political party accounts in the U.S. to turn on 2-step verification, and we recommend that these types of accounts everywhere use this extra layer of security. Accounts belonging to politicians and political parties do not have access to advertising features, and neither these accounts nor accounts belonging to governments have access to our monetization features, such as gifting or tipping.

In the US, we require that accounts belonging to governments, politicians, and political parties be verified. As part of this, we enforce different restrictions on their accounts, which include:

  • News entities, governments, politicians, and political party accounts all play important roles in civic processes and civil society. While we treat their content just like any other account and remove violations, we approach account-level enforcement differently to align with our commitment to respecting human rights and free expression.
  • These public interest accounts will be banned for any single severe content violation, such as threatening violence. For repeated content violations that are less severe, they will be temporarily ineligible to appear in the FYF and in the feeds of their followers. In limited circumstances, they may also be temporarily restricted from posting new content. Learn more about our approach to public interest accounts.

We have additional restrictions we can impose on these public interest accounts if they present a particularly high risk to public safety—such as during periods of civil unrest, elections, or other high-risk social and political environments. This includes:

  • If an account promotes violence, hate or misinformation during high-risk contexts, we may stop them from being able to post content for a period of 7 to 30 days, depending on the severity of the violation and associated risk.
  • We can extend the period if we think the account is unlikely to change their behavior, and we can consider their actions outside of TikTok in our decision too.

On occasion, we also apply a public interest exception to some content that would ordinarily violate our Community Guidelines, but which may be in the public interest to view because it appears in a documentary, educational, satirical, or counter-speech context.

Prohibiting paid promotion of political content

We’ve long prohibited paid promotion of political content on TikTok. This is because we don’t believe that this kind of advertising is conducive to the inclusive, authentic and creative experience we strive to provide for our community. Specifically, we don’t allow paid political promotion, political advertising, or fundraising by politicians and political parties, including both traditional paid ads or creators receiving compensation to support or oppose a candidate for office. Accounts belonging to politicians and political parties do not have access to advertising features, and neither these accounts nor accounts belonging to governments have access to our monetization features, such as gifting or tipping.

Violent and Criminal Behavior

We are committed to bringing people together in a way that does not lead to physical conflict, because we recognize that online content related to violence can cause real-world harm. We do not allow any violent threats, promotion of violence, incitement to violence, or promotion of criminal activities that may harm people, animals, or property.

If there is a specific, credible, and imminent threat to human life or serious physical injury, we report it to relevant law enforcement authorities.

We do not allow content that is threatening or that expresses a desire to cause physical injury to a person or a group. This includes:

  • Promoting or inciting violence, such as encouraging an attack or others to attack, praising a violent act, or recommending people bring weapons to a location to intimidate others
  • Promoting theft, or the destruction of property or the natural environment
  • Providing instructions on how to commit criminal activities that may harm people, animals, or property

Media Literacy Features

The key features below help us protect TikTok's safety and integrity through the 2024 US presidential election.

Election Center + Search Banner: An in-app banner pointing viewers to our election guide on content with unverifiable claims about voting, premature declarations of victory, or attempts to dissuade people from voting by exploiting COVID-19 as a voter suppression tactic.

Verified Badge: Confirms the account belongs to the person or brand it represents. This means you can be sure to know that the verified accounts you're following are exactly who they say they are, rather than a parody or fan account.

State-affiliated media: Identifies accounts run by entities whose editorial output or decision-making process is subject to control or influence by a government to ensure people have accurate, transparent, and actionable context when they engage with content from media accounts that may present the viewpoint of a government.

Unverified labels: Warning labels applied to content that has been assessed by our fact-checking partners and cannot be verified as accurate.

AIGC Labels: Identifies content generated with artificial intelligence.